How to Install AWS JAVA SDK in Ubuntu 16.04


In this post, I describe the installation of the AWS Java SDK on Ubuntu 16.04. I followed the official setup documentation [1].


1. Install the latest Oracle JAVA JDK.


First, I need to download the Oracle Java JDK (the JDK includes the JRE). In my case, I create a "javase" directory, then download and extract the archive there. I use the "wget" command with the "--no-check-certificate" option and a license cookie header to fetch the file directly from the download link.


# mkdir javase

# cd javase

# wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/10.0.2+13/19aef61b38124481863b1413dce1855f/jdk-10.0.2_linux-x64_bin.tar.gz


# tar xvfz jdk-10.0.2_linux-x64_bin.tar.gz


The extracted files are not installed system-wide; they are simply run in place. Therefore, I need to add their location to my PATH in ".bashrc". In my case, the "java" and "javac" binaries are located in "/home/ubuntu/javase/jdk-10.0.2/bin".


# cat /home/ubuntu/.bashrc

...

export PATH=$PATH:/home/ubuntu/javase/jdk-10.0.2/bin


# source /home/ubuntu/.bashrc


Now I can use the Oracle JDK. "java --version" shows the version information.

 

# java --version

java 10.0.2 2018-07-17

Java(TM) SE Runtime Environment 18.3 (build 10.0.2+13)

Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10.0.2+13, mixed mode)


I also create some sample code, "Sample.java", in a different directory. In my case, I create another directory named "projects" and write the code below.


# mkdir -p /home/ubuntu/projects

# cd /home/ubuntu/projects

# vi Sample.java

package com.sample;


class Sample {

     public static void main(String[] args){

           System.out.println("Hello, JAVA");

     }

}


Then compile the sample code with "javac -d . Sample.java". I use the "-d" option because I declared a package in the source, so the class file is placed under the package directory. After compiling, I can confirm the "Sample.class" file is located there, and run it.


# javac -d . Sample.java

# ls com/sample/Sample.class

com/sample/Sample.class


# java com/sample/Sample

Hello, JAVA


Now I can do Java programming.


2. Set up for the AWS SDK.


I still cannot use the AWS SDK at this point. In this part, I follow the instructions to set it up [2]. The documentation requires choosing a build tool; in my case, I select "Apache Maven". To use the SDK with Apache Maven, I need Maven installed.


2-1. Download, Install and Run Maven


I follow the Maven documentation [3]: make a directory for Maven, then download and extract it.


# mkdir apache-maven

# cd apache-maven

# wget http://apache.tt.co.kr/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz

# tar xvfz apache-maven-3.5.4-bin.tar.gz

 

In my case, "bin" is located in "/home/ubuntu/apache-maven/apache-maven-3.5.4/bin". I will add this path in my ".bashrc" file.


# cat /home/ubuntu/.bashrc

...

export PATH=$PATH:~/.local/bin:/home/ubuntu/javase/jdk-10.0.2/bin:/home/ubuntu/apache-maven/apache-maven-3.5.4/bin


# source /home/ubuntu/.bashrc


Now I can confirm the Maven version with the "mvn -v" command.


# mvn -v

Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z)

Maven home: /home/ubuntu/apache-maven/apache-maven-3.5.4

Java version: 10.0.2, vendor: Oracle Corporation, runtime: /home/ubuntu/javase/jdk-10.0.2

Default locale: en_US, platform encoding: UTF-8

OS name: "linux", version: "4.4.0-1066-aws", arch: "amd64", family: "unix"


The prerequisites are now ready, so I will follow the rest of the instructions.


2-2. Create a new Maven package


The documentation uses a Maven archetype [4] to generate the project skeleton. I think this is similar to the "create project" commands of other languages' tooling.


# cd projects/

# mvn -B archetype:generate \

  -DarchetypeGroupId=org.apache.maven.archetypes \

  -DgroupId=com.sample.myapps \

  -DartifactId=app


[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 54.510 s
[INFO] Finished at: 2018-09-12T21:02:15Z
[INFO] ------------------------------------------------------------------------

# ls -la app/
total 16
drwxr-xr-x 3 root root 4096 Sep 12 21:02 .
drwxr-xr-x 3 root root 4096 Sep 12 21:02 ..
-rw-r--r-- 1 root root  636 Sep 12 21:02 pom.xml
drwxr-xr-x 4 root root 4096 Sep 12 21:02 src


After running the command above, I can see the "app" directory. It contains "pom.xml" and a "src" directory, and I can also confirm there is an "App.java" file under "src/main/java/com/sample/myapps". To use the AWS SDK for Java in my project, I need to declare it in the "pom.xml" file. In my case, I want the entire AWS SDK, so I add the dependency below. (The latest version can be checked in the Maven repository [5].)


# vi app/pom.xml

  <dependencies>

    <dependency>

      ....................

    </dependency>

    <dependency>

      <groupId>com.amazonaws</groupId>

      <artifactId>aws-java-sdk</artifactId>

      <version>1.11.407</version>

    </dependency>

  </dependencies>

</project>



Now the project is configured to use the entire AWS SDK. I can check whether the configuration is correct with "mvn package", which builds the project.


# cd app/

# mvn package


After running the command, I can see the AWS SDK being downloaded. Please note that you may hit the error "Source option 5 is no longer supported. Use 6 or later." and the build fails; the quickstart archetype defaults to the Java 5 source level, which newer JDKs reject [6]. I solved this issue by adding compiler properties to "pom.xml".


# vi pom.xml

  <properties>

    <maven.compiler.source>1.6</maven.compiler.source>

    <maven.compiler.target>1.6</maven.compiler.target>

  </properties>


Now, let's try to write some code. But first, a question: where is the main function to start coding? In my case, the main file is located at "app/src/main/java/com/sample/myapps/App.java". Look at this file.

# vi app/src/main/java/com/sample/myapps/App.java

package com.sample.myapps;


/**

 * Hello world!

 *

 */

public class App

{

    public static void main( String[] args )

    {

        System.out.println( "Hello World!" );

    }

}



I have already built this source, so I can see the class file under "target/classes".

# ls app/target/classes/com/sample/myapps/App.class

app/target/classes/com/sample/myapps/App.class


Thus, I can run this class. The exec-maven-plugin makes this possible from the command line [9].

# mvn exec:java -Dexec.mainClass=com.sample.myapps.App


[INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ app ---

Hello World!

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS


Look! I get the result; it works now. If you hit an error, run "mvn clean" and "mvn compile" again.

# mvn clean

# mvn compile


I used "mvn exec:java -Dexec.mainClass=com.sample.myapps.App" command with some option. It is not simple to re-write. Therefore, I can add some plugin in the pom.xml. I add "<build><plugins></plugins></build>"

# cat pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>com.sample.myapps</groupId>

  <artifactId>app</artifactId>

  <packaging>jar</packaging>

  <version>1.0-SNAPSHOT</version>

  <name>app</name>

  <url>http://maven.apache.org</url>

  <properties>

    <maven.compiler.source>1.6</maven.compiler.source>

    <maven.compiler.target>1.6</maven.compiler.target>

  </properties>

  <dependencies>

    ........................................................

  </dependencies>

  <build>

    <plugins>

    <plugin>

        <groupId>org.codehaus.mojo</groupId>

        <artifactId>exec-maven-plugin</artifactId>

        <version>1.2.1</version>

        <executions>

            <execution>

                <goals>

                    <goal>java</goal>

                </goals>

            </execution>

        </executions>

        <configuration>

            <mainClass>com.sample.myapps.App</mainClass>

        </configuration>

    </plugin>

    </plugins>

  </build>

</project>


Now I can run it without the option.

# mvn exec:java



The credentials documentation [7] shows five ways the SDK obtains credentials; the default provider chain tries them in this order (a quick sketch follows the list).

1. Environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
2. Java system properties: aws.accessKeyId and aws.secretKey.
3. The default credential profiles file, typically located at ~/.aws/credentials.
4. Amazon ECS container credentials.
5. Instance profile credentials.
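
As a quick check, these five sources are exactly what the SDK's default provider chain walks through. Here is a minimal sketch of my own (not from the documentation) using the v1 class "DefaultAWSCredentialsProviderChain":

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CredentialCheck
{
    public static void main( String[] args )
    {
        // Tries the five sources above in order and returns the first match;
        // throws SdkClientException when none of them yields credentials.
        AWSCredentials creds = new DefaultAWSCredentialsProviderChain().getCredentials();
        System.out.println("Resolved access key: " + creds.getAWSAccessKeyId());
    }
}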

Each of these methods has its own provider class. The setup documentation [8] shows how to configure methods 1 and 3 above. I will use the "list S3 buckets" sample as the test code; I copied and pasted it into App.java.

# vi src/main/java/com/sample/myapps/App.java

package com.sample.myapps;


import com.amazonaws.services.s3.AmazonS3;

import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import com.amazonaws.services.s3.model.Bucket;

import java.util.List;


public class App

{

    public static void main( String[] args )

    {

        final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        List<Bucket> buckets = s3.listBuckets();

        System.out.println("Your Amazon S3 buckets are:");

        for (Bucket b : buckets) {

           System.out.println("* " + b.getName());

        }

    }

}


# mvn clean compile

# mvn exec:java


You may get an error like the one below. There are several possible reasons; one of them is a credential issue, meaning the application is not picking up valid credential or region information.

"InvocationTargetException: Failed to parse XML document with handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListAllMyBucketsHandler: The entity "ntilde" was referenced, but not declared."

According to the setup documentation [8], environment variables are the first source the SDK checks, so I export the parameters below.

# export AWS_ACCESS_KEY_ID=xxxxxx

# export AWS_SECRET_ACCESS_KEY=xxxx

# export AWS_REGION=ap-northeast-2


After exporting these values, I run "mvn exec:java" and finally get the results.

# mvn exec:java

Your Amazon S3 buckets are:

* cf-templates-16hzoxi2ozb3b-ap-northeast-1


Next, I try to run using "~/.aws/credentials" and "~/.aws/config". I add an "[s3with]" profile to each file, as below. To use this profile, I also need to export some variables; without them, I saw error messages like "InvocationTargetException: profile file cannot be null".

# vi ~/.aws/credentials

[default]

aws_access_key_id=xxxxx

aws_secret_access_key=xxxxxxx


[s3with]

aws_access_key_id=yyyyyy

aws_secret_access_key=yyyyy


# vi ~/.aws/config

[default]

region=zzzzz


[s3with]

region=ap-northeast-2


# export AWS_PROFILE=s3with

# export AWS_CREDENTIAL_PROFILES_FILE=/home/ubuntu/.aws/credentials


Now I need to revise the sample code. Please note the two additional imports: "import com.amazonaws.auth.profile.ProfileCredentialsProvider;" and "import com.amazonaws.regions.Regions;".

# cat src/main/java/com/sample/myapps/App.java

package com.sample.myapps;


import com.amazonaws.services.s3.AmazonS3;

import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import com.amazonaws.services.s3.model.Bucket;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;

import com.amazonaws.regions.Regions;

import java.util.List;


public class App

{

    public static void main( String[] args )

    {

        final AmazonS3 s3 = AmazonS3ClientBuilder.standard()

                        .withRegion(Regions.AP_NORTHEAST_2)

                        .withCredentials(new ProfileCredentialsProvider("s3with"))

                        .build();

        List<Bucket> buckets = s3.listBuckets();

        System.out.println("Your Amazon S3 buckets are:");

        for (Bucket b : buckets) {

           System.out.println("* " + b.getName());

        }

    }

}


Now I can get the results again. Another method is to embed the credentials in the code with "BasicAWSCredentials" and "AWSStaticCredentialsProvider". (This is convenient for a quick test, but hard-coding credentials is not recommended beyond that.)

import com.amazonaws.auth.BasicAWSCredentials;

import com.amazonaws.auth.AWSStaticCredentialsProvider;


And the sample code is rewritten as below. Note that it still needs the "Regions" import for the static region constant.

# cat src/main/java/com/sample/myapps/App.java

package com.sample.myapps;


import com.amazonaws.services.s3.AmazonS3;

import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import com.amazonaws.services.s3.model.Bucket;

import com.amazonaws.auth.BasicAWSCredentials;

import com.amazonaws.auth.AWSStaticCredentialsProvider;

import com.amazonaws.regions.Regions;

import java.util.List;


public class App

{

    public static void main( String[] args )

    {

        BasicAWSCredentials awsCreds = new BasicAWSCredentials("xxxxx", "xxxxx");

        final AmazonS3 s3 = AmazonS3ClientBuilder.standard()

                        .withRegion(Regions.AP_NORTHEAST_2)

                        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))

                        .build();

        List<Bucket> buckets = s3.listBuckets();

        System.out.println("Your Amazon S3 buckets are:");

        for (Bucket b : buckets) {

           System.out.println("* " + b.getName());

        }

    }

}


So far, I have used a profile for the credentials but a static constant for the region. One question remained: is there any way to take the region from the profile as well? The "AwsProfileRegionProvider" class offers this.


# cat src/main/java/com/sample/myapps/App.java

package com.sample.myapps;


import com.amazonaws.services.s3.AmazonS3;

import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import com.amazonaws.services.s3.model.Bucket;


import com.amazonaws.auth.profile.ProfileCredentialsProvider;

import com.amazonaws.regions.AwsProfileRegionProvider;

import java.util.List;


public class App

{

    public static void main( String[] args )

    {


        AwsProfileRegionProvider r = new AwsProfileRegionProvider("s3with");

        String rs = r.getRegion();


        final AmazonS3 s3 = AmazonS3ClientBuilder.standard()

                        .withRegion(rs)

                        .withCredentials(new ProfileCredentialsProvider("s3with"))

                        .build();

        List<Bucket> buckets = s3.listBuckets();

        System.out.println("Your Amazon S3 buckets are:");

        for (Bucket b : buckets) {

           System.out.println("* " + b.getName());

        }


    }

}


# mvn clean compile

# mvn exec:java


Now I can get both the credentials and the region from the profile. Please note that I have defined this information in both "credentials" and "config", so I need to export both environment variables below.


# export AWS_CONFIG_FILE=/home/ubuntu/.aws/config

# export AWS_CREDENTIAL_PROFILES_FILE=/home/ubuntu/.aws/credentials


If necessary, I can add these exports to my ".bashrc" file. Now, enjoy Java coding.



Reference 


[ 1 ] https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-install.html

[ 2 ] https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-project-maven.html

[ 3 ] https://maven.apache.org/index.html

[ 4 ] http://maven.apache.org/archetypes/maven-archetype-quickstart/

[ 5 ] https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bom

[ 6 ] https://github.com/spring-guides/gs-maven/issues/21

[ 7 ] https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html

[ 8 ] https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html

[ 9 ] https://stackoverflow.com/questions/19850956/maven-java-the-parameters-mainclass-for-goal-org-codehaus-mojoexec-maven-p

[ 10 ] https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/index.html?com/amazonaws/auth/profile/ProfileCredentialsProvider.html

How to Upload Object into S3 with multipart-upload using S3 API CLI


In this post, I will walk through uploading an object with multipart upload, using the "aws s3api" commands. Before studying this, I thought AWS did the whole process (splitting, uploading, and reassembling) by itself. However, with the CLI it does not: you drive each step yourself. (The SDK apparently can do it in one call; see the sketch below.) I will cover everything from splitting the file to completing the multipart upload.
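
As an aside, here is a minimal sketch, assuming the AWS SDK for Java v1 dependency from the previous post: its TransferManager handles the splitting, the parallel part uploads, and the completion automatically. The bucket and file names here are just for illustration.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class MultipartUploadSample
{
    public static void main( String[] args ) throws Exception
    {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3)
                .withMultipartUploadThreshold(40L * 1000 * 1000) // use multipart above ~40 MB
                .build();
        // Splits the file, uploads the parts, and completes the upload in one call.
        Upload upload = tm.upload("s3-multistore-bucket", "sample.mp4", new File("sample.mp4"));
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}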


1. Split object


Before uploading, I need to split the object into chunks of the size I want to upload; in my case, roughly 40 MB. The "split" command splits a file by a given size. "split -b 40000000 sample/TWICE「BDZ」Music\ Video.mp4 splited" means the file "sample/TWICE「BDZ」Music Video.mp4" is split into 40,000,000-byte chunks whose names are prefixed with "splited". Therefore, I end up with the two split files below.


# split -b 40000000 sample/TWICE「BDZ」Music\ Video.mp4 splited


# ls -la spliteda*

-rw-r--r-- 1 root root 40000000 Sep 11 09:52 splitedaa

-rw-r--r-- 1 root root 21067922 Sep 11 09:52 splitedab


I will upload these files with multipart-upload.


2. Create Multipart-Upload


There are several steps in a multipart upload. First, I create the multipart upload itself. This step tells S3 that I have something large to upload.


# aws s3api create-multipart-upload --bucket s3-multistore-bucket --key 'TWICE「BDZ」Music Video.mp4'

{

    "Bucket": "s3-multistore-bucket",

    "UploadId": "j.kLRorRoj1WEDV9iH1jrIeee5KETBL3raUH5odcycSxsl0RZY4p9Q.WK04lL9c7tsUxmDXwGEHwVlgm_MR.La4IHkM1M5xNQrCXossn5L_nXKJ0v.9_B3mNbL8GSoE8WBToEMPfELYF3VPh3g5PHg--",

    "Key": "TWICE「BDZ」Music Video.mp4"

}


After running this command, I get an "UploadId". Please note this value; it is used in all of the following steps.


3. Upload-part


So far, no data has been uploaded. I need to upload the split files with the "upload-part" command. In this command, there are two options to focus on: "upload-id" and "part-number". "upload-id" identifies the multipart upload created above. "part-number" is the sequence number used to assemble the parts at the end of the process.


# aws s3api upload-part --bucket s3-multistore-bucket --key 'TWICE「BDZ」Music Video.mp4' --part-number 1 --body splitedaa --upload-id  "j.kLRorRoj1WEDV9iH1jrIeee5KETBL3raUH5odcycSxsl0RZY4p9Q.WK04lL9c7tsUxmDXwGEHwVlgm_MR.La4IHkM1M5xNQrCXossn5L_nXKJ0v.9_B3mNbL8GSoE8WBToEMPfELYF3VPh3g5PHg--"

{

    "ETag": "\"45de77ba3c71c3105a358971b7464b5a\""

}

# aws s3api upload-part --bucket s3-multistore-bucket --key 'TWICE「BDZ」Music Video.mp4' --part-number 2 --body splitedab --upload-id "j.kLRorRoj1WEDV9iH1jrIeee5KETBL3raUH5odcycSxsl0RZY4p9Q.WK04lL9c7tsUxmDXwGEHwVlgm_MR.La4IHkM1M5xNQrCXossn5L_nXKJ0v.9_B3mNbL8GSoE8WBToEMPfELYF3VPh3g5PHg--"

{

    "ETag": "\"7fa82dacd6f59e523e60058660e1ab6f\""

}


I have uploaded the two parts to S3. However, I cannot see the object in the S3 console yet, because it is not a completed object. I need to finish the process.


4. Create "multipart" file


I had a question here: how does AWS know the order in which to assemble the parts into a single object? Each part has a part number, but the complete step still needs an explicit list of which parts (by ETag and part number) make up the final object. For this, I create a "multipart" file that defines the object. The s3api reference [1] gives the format for this file.


# vi multipart.file

{

 "Parts":[

   {

      "ETag": "\"45de77ba3c71c3105a358971b7464b5a\"",

      "PartNumber" : 1

   },

   {

      "ETag": "\"7fa82dacd6f59e523e60058660e1ab6f\"",

      "PartNumber" : 2

   }

 ]

}


Now I am ready to finish this multipart upload.


5. Complete multipart upload


I run the "complete-multipart-upload" command to finish the whole process.


# aws s3api complete-multipart-upload --multipart-upload file://multipart.file --bucket s3-multistore-bucket --key 'TWICE「BDZ」Music Video.mp4' --upload-id j.kLRorRoj1WEDV9iH1jrIeee5KETBL3raUH5odcycSxsl0RZY4p9Q.WK04lL9c7tsUxmDXwGEHwVlgm_MR.La4IHkM1M5xNQrCXossn5L_nXKJ0v.9_B3mNbL8GSoE8WBToEMPfELYF3VPh3g5PHg--

{

    "ETag": "\"a1a27863cd20501d3e564e10b461d99f-2\"",

    "Bucket": "s3-multistore-bucket",

    "Location": "https://s3-multistore-bucket.s3.ap-northeast-2.amazonaws.com/TWICE%E3%80%8CBDZ%E3%80%8DMusic+Video.mp4",

    "Key": "TWICE「BDZ」Music Video.mp4"

}


Now I can see this object in the S3 console. Look at the "ETag": it ends with "-2". This means the object was uploaded with multipart upload in 2 parts. This is also the reason why the ETag of a multipart object does not match the plain MD5 of the file; the sketch below recomputes it.
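
To my knowledge (this is commonly documented behavior, not something the s3api reference spells out), the multipart ETag is the MD5 of the concatenated per-part MD5 digests, with the part count appended. A small sketch that recomputes it from the split files of step 1:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class MultipartEtagCheck
{
    public static void main( String[] args ) throws Exception
    {
        String[] parts = { "splitedaa", "splitedab" };
        // MD5 over the concatenation of each part's MD5 digest.
        MessageDigest outer = MessageDigest.getInstance("MD5");
        for (String part : parts) {
            byte[] partDigest = MessageDigest.getInstance("MD5")
                    .digest(Files.readAllBytes(Paths.get(part)));
            outer.update(partDigest);
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : outer.digest()) {
            hex.append(String.format("%02x", b));
        }
        // Should print a1a27863cd20501d3e564e10b461d99f-2 for this upload.
        System.out.println(hex + "-" + parts.length);
    }
}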


Reference


[ 1 ] https://docs.aws.amazon.com/cli/latest/reference/s3api/index.html

[ 2 ] https://www.computerhope.com/unix/usplit.htm

How to Configure VXLAN in Ubuntu 16.04


In this post, I will configure a VXLAN example, following [1]. I also utilize a Linux bridge to define the L2 domain. The test environment is two hosts reachable over IP (147.75.75.185 and 147.75.73.195), each with a bridge "vbr0" holding an address in 192.168.10.0/24, connected by a VXLAN tunnel with VNI 42.


The concepts are difficult to understand. However, the steps are not difficult.


1. Install the Linux bridge and configure it.


In this step, I create a Linux bridge and its interface on each host, and set an IP address from the same broadcast domain on each bridge interface.


# apt-get install bridge-utils

# brctl addbr vbr0

# ip link show vbr0

5: vbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000

link/ether ee:c0:cb:d2:4b:ca brd ff:ff:ff:ff:ff:ff




# ip address add 192.168.10.11/24 dev vbr0

# ifconfig vbr0 up

# ip addr show vbr0

5: vbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000

    link/ether ee:c0:cb:d2:4b:ca brd ff:ff:ff:ff:ff:ff

    inet 192.168.10.11/24 scope global vbr0

       valid_lft forever preferred_lft forever

    inet6 fe80::ecc0:cbff:fed2:4bca/64 scope link

       valid_lft forever preferred_lft forever


2. Configure VXLAN with Unicast


I create the VTEP interface with the command below (on the peer host, the “local” and “remote” addresses are swapped). I can check the detailed information with the “-d” option.


# ip link add name vxlan42 type vxlan id 42 dev bond0 remote 147.75.73.195 local 147.75.75.185 dstport 4789

# ip -d link show vxlan42

6: vxlan42: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000

    link/ether aa:6f:fc:d6:7a:96 brd ff:ff:ff:ff:ff:ff promiscuity 0

    vxlan id 42 remote 147.75.73.195 local 147.75.75.185 dev bond0 srcport 0 0 dstport 4789 ageing 300 addrgenmode eui64


3. Add VXLAN interface on Linux Bridge


However, this is not enough to communicate over the tunnel: the “192.168.10.0/24” traffic cannot yet cross the Linux bridge into the tunnel. The VXLAN interface has to be attached to the Linux bridge.


# brctl addif vbr0 vxlan42

# ifconfig vxlan42 up

# brctl show

bridge name     bridge id               STP enabled     interfaces

vbr0            8000.aa6ffcd67a96       no              vxlan42


4. Testing and analysis


I will ping one of the “192.168.10.0/24” addresses.


# ping 192.168.10.21

PING 192.168.10.21 (192.168.10.21) 56(84) bytes of data.

64 bytes from 192.168.10.21: icmp_seq=1 ttl=64 time=0.291 ms

64 bytes from 192.168.10.21: icmp_seq=2 ttl=64 time=0.284 ms

64 bytes from 192.168.10.21: icmp_seq=3 ttl=64 time=0.314 ms

64 bytes from 192.168.10.21: icmp_seq=4 ttl=64 time=0.317 ms


And I dump packets while the pings are running. From the result, I can confirm the ICMP packets are encapsulated in VXLAN.


# tcpdump -ni bond0 not port 22 and not port 23

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on bond0, link-type EN10MB (Ethernet), capture size 262144 bytes

05:34:07.415035 IP 147.75.75.185.32933 > 147.75.73.195.4789: VXLAN, flags [I] (0x08), vni 42

IP 192.168.10.11 > 192.168.10.21: ICMP echo request, id 2832, seq 1, length 64

05:34:07.415264 IP 147.75.73.195.51434 > 147.75.75.185.4789: VXLAN, flags [I] (0x08), vni 42

IP 192.168.10.21 > 192.168.10.11: ICMP echo reply, id 2832, seq 1, length 64

05:34:08.414164 IP 147.75.75.185.32933 > 147.75.73.195.4789: VXLAN, flags [I] (0x08), vni 42

IP 192.168.10.11 > 192.168.10.21: ICMP echo request, id 2832, seq 2, length 64





Reference Links


[ 1 ] https://serverfault.com/questions/777179/configuring-vxlan-unicast-in-linux

[ 2 ] https://www.tldp.org/HOWTO/BRIDGE-STP-HOWTO/set-up-the-bridge.html

[ 3 ] https://www.kernel.org/doc/Documentation/networking/vxlan.txt

[ 4 ] https://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/

[ 5 ] http://www.codeblogbt.com/archives/301596



How to Configure “ipvsadm” in Ubuntu 16.04

 

I recently learned about “ipvsadm”, which is used as a load balancer.
“ipvsadm” manages the Linux kernel load balancer, also called LVS (Linux Virtual Server) [1].
LVS has three modes: DR (Direct Routing), Tunnel, and Masquerade. In this post, I will cover DR (also known as DSR) and Masquerade (Network Address Translation, NAT).

 

Direct Routing: the default, selected with “-g”. The packet is forwarded without modification. The servers that receive packets from “ipvsadm” respond to the client directly.

 

Network Address Translation (NAT): selected with “-m”. The packet is forwarded with the destination IP address rewritten (the source IP address is not modified). The servers have to respond through “ipvsadm”; usually, the servers use “ipvsadm” as their default gateway.

 

My test environment is Ubuntu 16.04, running on AWS.

 

 

 


1. DR mode
 

1-1. ipvsadm configuration

 

Enable IP forwarding, because “ipvsadm” has to transfer and distribute received packets. To enable it, set “net.ipv4.ip_forward=1” in “/etc/sysctl.conf” and run “sysctl -p /etc/sysctl.conf” to apply it.
Alternatively, it can be done with “echo 1 > /proc/sys/net/ipv4/conf/all/forwarding”.

 

Configuring the virtual server takes two steps. First, create the virtual server with a traffic distribution method such as round-robin. Second, register the real servers to which packets are distributed.
ipvsadm -C
ipvsadm -A -t 10.10.0.244:80 -s rr
ipvsadm -a -t 10.10.0.244:80 -r 10.10.0.233:80 -g
After this configuration, I can confirm the status with “ipvsadm -Ln”, “ipvsadm -Lcn”, and “ipvsadm -l --stats”.


 
“ipvsadm -Ln” shows the mapping information with the forwarding method. In this case, packets received on “10.10.0.244:80” will be routed to “10.10.0.233:80”.

“ipvsadm -Lcn” shows the current session information. At this point, there is no concurrent connection.

“ipvsadm -l --stats” shows the in/out traffic statistics.

1-2. Server configuration


In DR mode, the server receives the packet unmodified and responds to the client directly. However, the client would drop that response, because it arrives with the server's own IP address as its source. To resolve this issue, the server needs its loopback interface configured with the service IP address; in this case, “10.10.0.244”.
ifconfig lo:0 10.10.0.244 netmask 255.255.255.255
 


LVS Direct Routing works by forwarding packets to the MAC addresses of the servers, so we have to consider the Linux “ARP flux” problem: the server must not answer ARP requests for “10.10.0.244”. For this, I added the following to “/etc/sysctl.conf”.
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2

 

 

1-3. Testing and analysis


The client sends a request with “curl http://10.10.0.244” and gets the response from the server. I dumped traffic on both the “ipvsadm” host and the server.

Look at the “ipvsadm” result: there is no change of the source or destination IP address.

This looks strange. “ipvsadm” itself owns “10.10.0.244”, so it appears to be sending to itself. This is the DR mode property: forwarding is done by the servers' MAC addresses. Looking at the connection information with “ipvsadm -Lcn”, the real destination can be seen.

What happened on the server? The packet arrives with destination “10.10.0.244”, and the server responds to that conversation. More importantly, the response packet to the client carries “10.10.0.244” as its source IP address. Because of this, the client does not drop the packet.
 

 


2. Network Address Translation Mode


In NAT mode, the response has to return through “ipvsadm”. Note that the source IP address is not modified on the way to the server; NAT mode only rewrites the destination IP address.

 

2-1. ipvsadm configuration


Enable IP forwarding, as in DR mode, so that “ipvsadm” can transfer and distribute received packets.

Configuring the virtual server again takes two steps: create the virtual server with a distribution method, then register the real servers, this time with “-m”.
ipvsadm -C
ipvsadm -A -t 10.10.0.244:80 -s rr
ipvsadm -a -t 10.10.0.244:80 -r 10.10.0.233:80 -m

“ipvsadm -Ln” shows the forwarding method changed from “Route” to “Masq”.

2-2. Server configuration


The server receives a packet whose destination address has been rewritten. Remember that “ipvsadm” does not change the source IP address, so the server would otherwise send its response straight back to the client.

I use the same network topology as above, so “ipvsadm” and the server are located on the same network. I add a static route on the server to send responses for the client network (10.10.1.0/24) back through “ipvsadm”.
route add -net 10.10.1.0 netmask 255.255.255.0 gw 10.10.0.244
 

 

2-3. Testing and analysis


Look at the “ipvsadm” packet flow in the tcpdump. It shows the destination IP address rewritten from 10.10.0.244 to 10.10.0.233 on the way to the server. In the response, the source IP address is rewritten back from 10.10.0.233 to 10.10.0.244.

Look at the server packet flow. The server only does its normal processing.

 

 

 

2-4. “ipvsadm” with SNAT (L3 mode, Proxy mode)


So far, “ipvsadm” and the server have been located on the same network, which makes NAT mode easy to construct with a static route on the server side. However, “ipvsadm” and the servers can also be located on different networks.
For such an L3 environment, “ipvsadm” has to modify the source IP address as well when the packet is sent to the server. I will add a rule in “iptables” for this.
Before adding this rule, we need two settings in “/etc/sysctl.conf”; the iptables rule does not work without them.
net.ipv4.vs.conntrack = 1
net.ipv4.vs.snat_reroute = 1

 


After this, I add the SNAT rule to iptables, matching the virtual service with the “-m ipvs” module [7].
iptables -t nat -A POSTROUTING -o eth0 --dst 10.10.0.233 -m ipvs --ipvs --vaddr 10.10.0.244 --vport 80 --vmethod masq -j SNAT --to-source 10.10.0.244
 


Then we can see in the tcpdump that the source IP is no longer the client IP address; it is rewritten before the packet is sent to the server.


 

3. ipvsadm with MARK of iptables


Occasionally, we need to use the MARK feature of iptables, together with the PREROUTING chain. Two steps are necessary. First, the packet received from the client is marked by iptables. Second, the marked packet is distributed to the servers.
To mark the packet, I have to use the mangle table, which handles marking and QoS. In this case, I insert the rule below.
iptables -A PREROUTING -t mangle -d 10.10.0.244/32 -j MARK --set-mark 1
 

 

Then I edit the “ipvs” configuration to balance on the firewall mark instead of an address.
ipvsadm -C
ipvsadm -A -f 1 -s rr
ipvsadm -a -f 1 -r 10.10.0.233:0 -m

Afterwards, I can see the change: “FWM 1” refers to the MARK value set in iptables.
 


Reference Links


[ 1 ] http://www.ultramonkey.org/papers/lvs_tutorial/html/
[ 2 ] http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.persistent_connection.html
[ 3 ] https://techiess.com/2010/09/09/load-balancing-dsr-direct-server-return/
[ 4 ] https://www.cyberciti.biz/faq/ubuntu-linux-add-static-routing/
[ 5 ] https://terrywang.net/2016/02/02/new-iptables-gotchas.html
[ 6 ] https://bugs.launchpad.net/ubuntu/+source/keepalived/+bug/1641918
[ 7 ] http://www.loadbalancer.org/blog/enabling-snat-in-lvs-xt_ipvs-and-iptables/
[ 8 ] http://manpages.ubuntu.com/manpages/trusty/man8/ipvsadm.8.html


 

 
