Channel: Vert.x

Real-time bidding with Websockets and Vert.x

The expectations users have for interactivity with web applications have changed over the past few years. Users bidding in an auction no longer want to press the refresh button to check whether the price has changed or the auction is over; that made bidding difficult and less fun. Instead, they expect to see updates in the application in real time.

In this article I want to show how to create a simple application that provides real-time bidding. We will use WebSockets, SockJS and Vert.x.

We will create a front-end for fast bidding that communicates with a micro-service written in Java and based on Vert.x.

What are Websockets?

WebSocket is an asynchronous, bidirectional, full-duplex protocol that provides a communication channel over a single TCP connection. Together with the WebSocket API, it enables bidirectional communication between a website and a remote server.

WebSockets solve many of the problems that prevented the HTTP protocol from being suitable for modern, real-time applications. Workarounds like polling are no longer needed, which simplifies application architecture. WebSockets do not need to open multiple HTTP connections; they reduce unnecessary network traffic and latency.

Websocket API vs SockJS

Unfortunately, WebSockets are not supported by all web browsers. However, there are libraries that provide a fallback when WebSockets are not available. One such library is SockJS. SockJS first tries to use the WebSocket protocol; if that is not possible, it falls back to a variety of browser-specific transport protocols. SockJS is designed to work in all modern browsers and in environments that do not support the WebSocket protocol, for instance behind a restrictive corporate proxy. It provides an API similar to the standard WebSocket API.

Frontend for fast bidding

The auction web page contains the bidding form and some simple JavaScript which loads the current price from the service, opens an event bus connection to the SockJS server, and offers bidding. The HTML source code of a sample web page on which we bid might look like this:

<h3>Auction 1</h3>
<div id="error_message"></div>
<form>
    Current price:
    <span id="current_price"></span>
    <div>
        <label for="my_bid_value">Your offer:</label>
        <input id="my_bid_value" type="text">
        <input type="button" onclick="bid();" value="Bid">
    </div>
    <div>
        Feed:
        <textarea id="feed" rows="4" cols="50" readonly></textarea>
    </div>
</form>

We use the vertx-eventbus.js library to create a connection to the event bus. vertx-eventbus.js is part of the Vert.x distribution and internally uses the SockJS library to send data to the SockJS server. In the code snippet below we create an instance of the event bus. The parameter to the constructor is the URI where to connect to the event bus. Then we register a handler listening on an address of the form auction.<auction id>. Each client can register at multiple addresses, e.g. when bidding in auction 1234 they register on the address auction.1234, etc. When data arrives in the handler, we update the current price and the bidding feed on the auction’s web page.

function registerHandlerForUpdateCurrentPriceAndFeed() {
    var eventBus = new EventBus('http://localhost:8080/eventbus');
    eventBus.onopen = function(){
        eventBus.registerHandler('auction.' + auction_id, function(error, message){
            document.getElementById('current_price').innerHTML = JSON.parse(message.body).price;
            document.getElementById('feed').value += 'New offer: ' + JSON.parse(message.body).price + '\n';
        });
    }
};

Any attempt by the user to bid generates a PATCH Ajax request to the service with information about the new offer (see the bid() function). On the server side we publish this information on the event bus to all clients registered on the address. If the service responds with an HTTP status code other than 200 (OK), an error message is displayed on the web page.

function bid() {
    var newPrice = document.getElementById('my_bid_value').value;

    var xmlhttp = (window.XMLHttpRequest) ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
    xmlhttp.onreadystatechange = function(){
        if (xmlhttp.readyState == 4) {
            if (xmlhttp.status != 200) {
                document.getElementById('error_message').innerHTML = 'Sorry, something went wrong.';
            }
        }
    };
    xmlhttp.open("PATCH", "http://localhost:8080/api/auctions/" + auction_id);
    xmlhttp.setRequestHeader("Content-Type", "application/json");
    xmlhttp.send(JSON.stringify({price: newPrice}));
};

Auction Service

The SockJS client requires a server-side part. Now we are going to create a lightweight RESTful auction service. We will send and retrieve data in JSON format. Let’s start by creating a verticle. First we need to inherit from AbstractVerticle and override the start method. Each verticle instance has a member variable called vertx, which provides access to the Vert.x core API. For example, to create an HTTP server you call the createHttpServer method on the vertx instance. To tell the server to listen on port 8080 for incoming requests you use the listen method.

We need a router with routes. A router takes an HTTP request and finds the first matching route. The route can have a handler associated with it, which receives the request (e.g. route that matches path /eventbus/* is associated with eventBusHandler).

We can do something with the request, and then, end it or pass it to the next matching handler.

If you have a lot of handlers it makes sense to split them up into multiple routers.

You can do this by mounting a router at a mount point in another router (see auctionApiRouter that corresponds to /api mount point in code snippet below).

Here’s an example verticle:

public class AuctionServiceVerticle extends AbstractVerticle {

    @Override
    public void start() {
        Router router = Router.router(vertx);

        router.route("/eventbus/*").handler(eventBusHandler());
        router.mountSubRouter("/api", auctionApiRouter());
        router.route().failureHandler(errorHandler());
        router.route().handler(staticHandler());

        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }

    //…
}

Now we’ll look at things in more detail. We’ll discuss the Vert.x features used in the verticle: the error handler, SockJS handler, body handler, shared data, static handler, and routing based on method, path, etc.

Error handler

As well as setting handlers to handle requests, you can also set a handler for failures in routing. A failure in routing occurs if a handler throws an exception or calls the fail method. To render error pages we use the error handler provided by Vert.x:

private ErrorHandler errorHandler(){
    return ErrorHandler.create();
}

SockJS handler

Vert.x provides a SockJS handler with an event bus bridge that extends the server-side Vert.x event bus into client-side JavaScript.

Configuring the bridge to tell it which messages should pass through is easy. You can specify which matches you want to allow for inbound and outbound traffic using BridgeOptions. If a message is outbound, before sending it from the server to the client-side JavaScript, Vert.x will look through any permitted outbound matches. In the code snippet below we allow any messages from addresses starting with “auction.” and ending with digits (e.g. auction.1, auction.100, etc.).

If you want to be notified when an event occurs on the bridge you can provide a handler when calling the bridge. For example, SOCKET_CREATED event will occur when a new SockJS socket is created. The event is an instance of Future. When you are finished handling the event you can complete the future with “true” to enable further processing.

To start the bridge, simply call the bridge method on the SockJS handler:

private SockJSHandler eventBusHandler(){
    BridgeOptions options = new BridgeOptions()
            .addOutboundPermitted(new PermittedOptions().setAddressRegex("auction\\.[0-9]+"));
    return SockJSHandler.create(vertx).bridge(options, event -> {
         if (event.type() == BridgeEventType.SOCKET_CREATED) {
            logger.info("A socket was created");
        }
        event.complete(true);
    });
}
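To double-check which addresses the permitted-matches pattern allows, you can try the same regular expression with java.util.regex outside of Vert.x. This is a standalone sketch; the sample addresses are made up:

```java
import java.util.regex.Pattern;

public class AddressRegexDemo {
    public static void main(String[] args) {
        // The same pattern used in the BridgeOptions above
        Pattern permitted = Pattern.compile("auction\\.[0-9]+");

        System.out.println(permitted.matcher("auction.1").matches());    // true
        System.out.println(permitted.matcher("auction.1234").matches()); // true
        // No auction id after the dot, so the bridge would not let this through
        System.out.println(permitted.matcher("auction.").matches());     // false
    }
}
```

Note that the bridge matches the whole address against the regex, so an address must consist of the prefix plus at least one digit.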

Body handler

The BodyHandler allows you to retrieve the request body, limit the body size, and handle file uploads. The body handler should be on a matching route for any request that requires this functionality. We need a BodyHandler during the bidding process (the PATCH request to /auctions/:id carries a request body with information about the new offer). Creating a new body handler is simple:

BodyHandler.create();

If the request body is in JSON format, you can retrieve it with the getBodyAsJson method.

Shared data

Shared data contains functionality that allows you to safely share the data between different applications in the same Vert.x instance or across a cluster of Vert.x instances. Shared data includes local shared maps, distributed, cluster-wide maps, asynchronous cluster-wide locks and asynchronous cluster-wide counters.

To simplify the application we use the local shared map to save information about auctions. The local shared map allows you to share data between different verticles in the same Vert.x instance. Here’s an example of using a shared local map in an auction service:

public class AuctionRepository {

    //…

    public Optional<Auction> getById(String auctionId) {
        LocalMap<String, String> auctionSharedData = this.sharedData.getLocalMap(auctionId);

        return Optional.of(auctionSharedData)
            .filter(m -> !m.isEmpty())
            .map(this::convertToAuction);
    }

    public void save(Auction auction) {
        LocalMap<String, String> auctionSharedData = this.sharedData.getLocalMap(auction.getId());

        auctionSharedData.put("id", auction.getId());
        auctionSharedData.put("price", auction.getPrice().toString());
    }

    //…
}

If you want to store auction data in a database, Vert.x provides a few different asynchronous clients for accessing various data stores (MongoDB, Redis, or the JDBC client).

Auction API

Vert.x lets you route HTTP requests to different handlers based on pattern matching on the request path. It also enables you to extract values from the path and use them as parameters in the request. Corresponding methods exist for each HTTP method. The first matching one will receive the request. This functionality is particularly useful when developing REST-style web applications.

To extract parameters from the path, you can use the colon character to denote the name of a parameter. Regular expressions can also be used to extract more complex matches. Any parameters extracted by pattern matching are added to the map of request parameters.
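The matching idea behind such path parameters can be illustrated with plain java.util.regex. This is only an illustration of the concept, not Vert.x’s actual routing code, and the path value is made up:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathParamDemo {
    public static void main(String[] args) {
        // A route like "/auctions/:id" conceptually becomes a pattern
        // with a named capturing group for the parameter
        Pattern route = Pattern.compile("/auctions/(?<id>[^/]+)");

        Matcher m = route.matcher("/auctions/1234");
        if (m.matches()) {
            // The captured value would land in the request parameter map
            System.out.println("id = " + m.group("id")); // id = 1234
        }
    }
}
```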

consumes describes which MIME types the handler can consume, while produces defines which MIME types the route produces. In the code below, the routes will match any request whose content-type and accept headers match application/json.

Let’s look at an example of a subrouter mounted on the main router created in the verticle’s start method:

private Router auctionApiRouter(){
    AuctionRepository repository = new AuctionRepository(vertx.sharedData());
    AuctionValidator validator = new AuctionValidator(repository);
    AuctionHandler handler = new AuctionHandler(repository, validator);

    Router router = Router.router(vertx);
    router.route().handler(BodyHandler.create());

    router.route().consumes("application/json");
    router.route().produces("application/json");

    router.get("/auctions/:id").handler(handler::handleGetAuction);
    router.patch("/auctions/:id").handler(handler::handleChangeAuctionPrice);

    return router;
}

The GET request returns auction data, while the PATCH method request allows you to bid up in the auction. Let’s focus on the more interesting method, namely handleChangeAuctionPrice. In the simplest terms, the method might look like this:

public void handleChangeAuctionPrice(RoutingContext context) {
    String auctionId = context.request().getParam("id");
    Auction auction = new Auction(
        auctionId,
        new BigDecimal(context.getBodyAsJson().getString("price"))
    );

    this.repository.save(auction);
    context.vertx().eventBus().publish("auction." + auctionId, context.getBodyAsString());

    context.response()
        .setStatusCode(200)
        .end();
}

A PATCH request to /auctions/1 results in the auctionId variable getting the value 1. We save the new offer in the auction and then publish this information on the event bus to all clients registered on the address in the client-side JavaScript. When you have finished with the HTTP response you must call the end method on it.
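A side note on why the handler wraps the incoming price string in a BigDecimal rather than parsing it as a double: decimal money amounts stay exact. A standalone sketch (the amounts are made up):

```java
import java.math.BigDecimal;

public class PriceDemo {
    public static void main(String[] args) {
        // The bid arrives in the JSON body as a string, e.g. {"price": "101.50"}
        BigDecimal current = new BigDecimal("101.50");
        BigDecimal increment = new BigDecimal("0.10");

        // BigDecimal keeps decimal amounts exact, which matters for money
        System.out.println(current.add(increment)); // 101.60
    }
}
```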

Static handler

Vert.x provides a handler for serving static web resources. The default directory from which static files are served is webroot, but this can be configured. By default the static handler sets cache headers to enable browsers to cache files; this can be disabled with the setCachingEnabled method. To serve the auction HTML page, JS files, and other static files from the auction service, you can create a static handler like this:

private StaticHandler staticHandler(){
    return StaticHandler.create()
        .setCachingEnabled(false);
}

Let’s run!

The full application code is available on GitHub.

Clone the repository and run ./gradlew run.

Open one or more browsers and point them to http://localhost:8080. Now you can bid in auction:

Real time bidding in application

Summary

This article presented the outline of a simple application that allows real-time bidding. We created a lightweight, high-performance, scalable micro-service written in Java and based on Vert.x. We discussed what Vert.x offers, including a distributed event bus and an elegant API that lets you create applications in no time.


Using Hamcrest Matchers with Vert.x Unit

Vert.x Unit is a very elegant library for testing asynchronous applications developed with Vert.x. However, because of this asynchronous aspect, reporting test failures is not natural for JUnit users: failed assertions need to be reported to the test context, which controls the execution (and so the outcome) of the test. In other words, in a Vert.x Unit test you cannot use the regular JUnit assertions and assertion libraries. In this blog post, we propose a way to let you use Hamcrest matchers in Vert.x Unit tests.

Using Vert.x Unit

Vert.x Unit is a test library made to check the behavior of Vert.x applications. It lets you write tests that verify asynchronous behavior.

Vert.x Unit can be used with JUnit. For this, you just need to add the following dependency to your project:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-unit</artifactId>
  <version>3.2.0</version>
  <scope>test</scope>
</dependency>

If you are using Gradle, the dependency is:

testCompile 'io.vertx:vertx-unit:3.2.0'

If you are using an IDE, just add the vertx-unit jar to your project classpath.

Obviously, you would need to add JUnit too.

Notice that vertx-unit does not need JUnit, and can be used without it. Check the Vert.x Unit documentation for more details.

Vert.x Unit example

Let’s consider this very simple Verticle:

public class MyFirstVerticle extends AbstractVerticle {

  @Override
  public void start(final Future<Void> future) throws Exception {
    vertx.createHttpServer()
        .requestHandler(req -> req.response().end("hello vert.x"))
        .listen(8080, done -> {
          if (done.failed()) {
            future.fail(done.cause());
          } else {
            future.complete();
          }
        });
  }
}

It just creates a new HTTP server and when launched it notifies the future of the completion.

To test this verticle with Vert.x Unit you would write something like:

@RunWith(VertxUnitRunner.class)
public class MyFirstVerticleTest {

  private Vertx vertx;

  @Before
  public void setUp(TestContext context) {
    vertx = Vertx.vertx();
    vertx.deployVerticle(MyFirstVerticle.class.getName(),
      context.asyncAssertSuccess());
  }

  @Test
  public void test(TestContext context) {
    Async async = context.async();
    vertx.createHttpClient().get(8080, "localhost", "/")
      .handler(response -> {
        context.assertEquals(200, response.statusCode());
        response.bodyHandler(buffer -> {
          context.assertEquals("hello vert.x", buffer.toString("utf-8"));
          async.complete();
        });
      })
      .end();
  }
}

First, the test class is annotated with @RunWith(VertxUnitRunner.class), instructing JUnit to use this special runner. This runner lets you inject a TestContext parameter into every test method (as well as into @Before and @After methods) to handle the asynchronous aspects of the test.

In the setUp method, it creates a new Vertx instance and deploys the verticle. Thanks to context.asyncAssertSuccess(), it waits for the successful completion of the verticle deployment. Indeed, deployment is asynchronous, and we must be sure that the verticle has been deployed and has completed its initialization before starting to test it.

The test() method creates an Async object that will be used to report when the test has been completed. Then it creates an HTTP client to emit a request on the server from our verticle and checks that:

  1. the HTTP code is 200 (OK)
  2. the body is hello vert.x

As you can see, the assertion methods are called on the TestContext object, which controls the test execution. When everything has been checked, we call async.complete() to end the test. If an assertion fails, the test stops with a failure. This would not be the case with regular JUnit assertions.

Using the Hamcrest Matchers

In the previous example, we used the assertions available from the TestContext instance. However, it provides a limited set of methods. Hamcrest is a library of matchers, which can be combined to create flexible expressions of intent in tests. It is very convenient when testing complex applications.

Hamcrest cannot be used directly as it would not report the failure on the TestContext. For this purpose we create a VertxMatcherAssert class:

public class VertxMatcherAssert {

  public static <T> void assertThat(TestContext context, T actual,
      Matcher<? super T> matcher) {
    assertThat(context, "", actual, matcher);
  }

  public static <T> void assertThat(TestContext context, String reason,
      T actual, Matcher<? super T> matcher) {
    if (!matcher.matches(actual)) {
      Description description = new StringDescription();
      description.appendText(reason)
          .appendText("\nExpected: ")
          .appendDescriptionOf(matcher)
          .appendText("\n     but: ");
      matcher.describeMismatch(actual, description);
      context.fail(description.toString());
    }
  }

  public static void assertThat(TestContext context, String reason,
      boolean assertion) {
    if (!assertion) {
      context.fail(reason);
    }
  }
}

This class provides assertThat methods that report errors on the given TestContext. The complete code is available here.

With this class, we can re-implement our test as follows:

@Test
public void testWithHamcrest(TestContext context) {
  Async async = context.async();
  vertx.createHttpClient().get(8080, "localhost", "/").handler(response -> {
    assertThat(context, response.statusCode(), is(200));
    response.bodyHandler(buffer -> {
      assertThat(context, buffer.toString("utf-8"), is("hello vert.x"));
      async.complete();
    });
  }).end();
}

To ease usage, I’ve added two static imports:

import static io.vertx.unit.example.VertxMatcherAssert.assertThat;
import static org.hamcrest.core.Is.is;

You can use any Hamcrest matcher, or even implement your own, as long as you use the assertThat method provided by VertxMatcherAssert.

Conclusion

In this post we have seen how you can combine Hamcrest and Vert.x Unit. You are no longer limited to the set of assert methods provided by Vert.x Unit, and can use the whole expressiveness of Hamcrest matchers.

Don’t forget that you still can’t use the assert methods from JUnit, as they don’t report to the TestContext.

Intro to Vert.x Shell

Vert.x Shell provides an extensible command line for Vert.x, accessible via SSH, Telnet or a nice web interface. Vert.x Shell comes out of the box with plenty of commands for Vert.x, which makes it very handy for simple management operations like deploying a verticle or getting the list of deployed verticles. One powerful feature of Vert.x Shell is its extensibility: you can easily augment Vert.x Shell with your own commands. Let’s build an http-client in JavaScript!

Booting the Shell

Vert.x Shell can be started in a couple of lines, depending on the connectors you configure. The documentation provides several examples showing the Shell Service configuration. For testing our command we will use the Telnet protocol, because it is easy to configure and use, so we just copy the corresponding section into vertx-http-client.js:

var ShellService = require("vertx-shell-js/shell_service");
var service = ShellService.create(vertx, {
  "telnetOptions" : {
    "host" : "localhost",
    "port" : 4000
  }
});
service.start();

We can run it:

Juliens-MacBook-Pro:java julien$ vertx run vertx-http-client.js
Succeeded in deploying verticle

And connect to the shell:

Juliens-MacBook-Pro:~ julien$ telnet localhost 4000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
__      __ ______  _____  _______  __   __
\ \    / /|  ____||  _  \|__   __| \ \ / /
 \ \  / / | |____ | :_) |   | |     \   /
  \ \/ /  |  ____||   __/   | |      > /
   \  /   | |____ | |\ \    | |     / //\
\/    |______||_| \_\   |_| o  /_/ \_\


%

You can now already use the shell; the help command lists the available commands.

Creating a command

For the sake of simplicity we will write a single script that starts the Shell service and deploys our command. In the real world you would probably have the command in one file and the deployment in another.

The documentation explains how to add a new command to Vert.x shell, we can just copy this section and append it to the vertx-http-client.js script:

var CommandBuilder = require("vertx-shell-js/command_builder");
var CommandRegistry = require("vertx-shell-js/command_registry");

var builder = CommandBuilder.command("http-client");
builder.processHandler(function(process){

  // Write a message to the console
  process.write("Implement the client\n");

  // End the process
  process.end();
});

// Register the command
var registry = CommandRegistry.getShared(vertx);
registry.registerCommand(builder.build(vertx));

Now you can use the command just to see it in action:

% http-client
Implement the client
%

Checking arguments

The http-client requires a url argument, so an argument check is performed at the beginning of the process handler:

// Check the url argument
if (process.args().length < 1) {
  process.write("Missing URL\n").end();
  return;
}
var url = process.args()[0];

Implementing the command

The final step of this tutorial is the actual implementation of the client logic based on Vert.x HttpClient:

// Create an HTTP client
var client = vertx.createHttpClient();

// Create the client request
var request = client.getAbs(url, function(response) {

  // Print the response in the shell console
  response.handler(function(buffer){
    process.write(buffer.toString("UTF-8"));
  });

  // End the command when the response ends
  response.endHandler(function(){
    process.end();
  });
});

// Set a request handler to end the command with error
request.exceptionHandler(function(err){
  process.write("Error: " + err.getMessage());
  process.end();
});

// End the http request
request.end();

And we can test the command in the shell:

% http-client http://vertx.io
http-client http://vertx.io
Vert.x......
/javascripts/sticky_header.js>%

Finally

We have seen how easy it is to extend Vert.x with a shell and create a custom http-client command. You can get the full source code here.

Our command is very simple and only implements the bare minimum. In future posts we will improve it with support for more HTTP methods, SSL, and headers, using the Vert.x CLI API.

Vert.x 3.2.1 is released!

We are pleased to announce the release of Vert.x 3.2.1!

The release contains many bug fixes and a ton of small improvements, such as future composition, improved Ceylon support, Stomp virtual host support, performance improvements… Full release notes can be found here:

https://github.com/vert-x3/wiki/wiki/3.2.1---Release-Notes

Breaking changes are here:

https://github.com/vert-x3/wiki/wiki/3.2.1---Breaking-Changes

The event bus client using the SockJS bridge is available from NPM, Bower, and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan.

Many thanks to all the committers and community whose contributions made this possible.

Next stop is Vert.x 3.3.0 which we hope to have out in May 2016.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding!

Vertx 3 and Azure cloud platform tutorial

Vert.x 3.2.1 applications can quickly be deployed on Microsoft Azure. Deployment is independent of your build, so it is all about configuration.

About Azure

Azure, by design, does not support multicast at the network virtualization level. However, all virtual machines defined in the same group are deployed on the same network (by default), so TCP-IP discovery can be enabled and quickly set up to form a cluster.

This is how you would deploy your app:

  1. create a fat-jar with your app
  2. create a cluster.xml with tcp-ip discovery
  3. run your app with the folder containing your cluster.xml on the classpath, passing -cluster -cluster-host VM_PRIVATE_IP
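For step 2, the relevant part of cluster.xml might look like the sketch below, assuming Hazelcast (the default Vert.x 3 cluster manager); the member addresses are placeholders for your VMs’ private IPs:

```xml
<network>
  <join>
    <!-- Azure does not support multicast, so turn it off... -->
    <multicast enabled="false"/>
    <!-- ...and enable TCP-IP discovery with the private IPs of the VMs -->
    <tcp-ip enabled="true">
      <member>10.0.0.4</member>
      <member>10.0.0.5</member>
    </tcp-ip>
  </join>
</network>
```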

Screencast

The following screencast shows how to do this step by step:

Don’t forget to follow our YouTube channel!

Vertx 3 and Keycloak tutorial

With the upcoming release of Vert.x 3.3, securing your application with Keycloak is even easier than before.

About Keycloak

Keycloak describes itself as an Open Source Identity and Access Management For Modern Applications and Services.

With Keycloak you can quickly add authentication and authorization to your vert.x application. The easy way is to set up a realm in Keycloak and, once you’re done, export the configuration to your vert.x app.

This is how you would secure your app:

  1. create an OAuth2Auth instance with OAuth2Auth.createKeycloak(...)
  2. copy your config from the keycloak admin GUI
  3. setup your callback according to what you entered on keycloak
  4. secure your resource with router.route("/protected/*").handler(oauth2)

Screencast

The following screencast explains how you can do this from scratch:

Don’t forget to follow our YouTube channel!

Vert.x 3.3.0 is released!

That was a long run… but here we are. We are very pleased to announce the release of Vert.x 3.3.0!

This release is huge with lots of new features, improvements, and obviously bug fixes. We won’t detail all the new features here (some are highlighted below), and full release notes are available: https://github.com/vert-x3/wiki/wiki/3.3.0---Release-Notes

Breaking changes are listed here: https://github.com/vert-x3/wiki/wiki/3.3.0---Breaking-Changes. Be sure to read them before migrating.

The event bus client using the SockJS bridge is available from NPM, Bower, and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

Let’s highlight some of the major features shipped with this release.

  • Vert.x 3.3.0 is the first version to support HTTP/2 (client and server). You can now configure HTTP servers and clients to use HTTP/2. Proxy support for the TCP and HTTP clients has also been added.
  • This version also introduces a bridge with Apache Camel, so integrating Vert.x applications with legacy systems (using EIP) has never been so easy.
  • Several new components have been developed to implement microservice-based applications. First, a pluggable service discovery is now available. An implementation of the circuit breaker pattern has also been provided.
  • AMQP 1.0 support has also been integrated thanks to a bridge to send and receive messages from AMQP. A client has also been shipped to interact directly with an AMQP broker or router.
  • New metrics have also been introduced to ease the monitoring of running applications. For instance, it’s now possible to monitor thread usage in the worker thread pool and in the JDBC connection pools.
  • With this version, you can configure the TCP aspects of the event bus to, for instance, use SSL. Also notice a bridge between the event bus of Vert.x 2 and that of Vert.x 3.
  • Most of the delivered components are now deployable in OSGi environments. So you can easily integrate Vert.x in Apache Karaf, Service Mix, or Apache Sling.
  • Vert.x Unit usability has been greatly improved. It is now possible to write tests using Hamcrest, AssertJ, Rest Assured, or any assertion library you want.

Many thanks to all the committers and the community whose contributions made this possible, especially to Alex Lehman, Paul Bakker, Robbie Gemmel, Claus Ibsen, Michael Kremer, and many, many others!

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Just a word about the future: as we did last year, a poll will be organized in the next few weeks to collect ideas and prioritize the roadmap for Vert.x 3.4 and beyond. Stay tuned, we love hearing about your ideas and issues.

Happy coding!

Vert.x 3.3.2 is released!

We have just released Vert.x 3.3.2, the first bug fix release of Vert.x 3.3.x.

We first released 3.3.1, which fixed a few bugs, but a couple of new bugs were discovered after 3.3.1 was tagged but before it was announced. We decided to release 3.3.2 to fix them, as these bugs were preventing usage of Vert.x.

Vert.x 3.3.1 release notes:

Vert.x 3.3.2 release notes:

These releases do not contain breaking changes.

The event bus client using the SockJS bridge is available from NPM, Bower, and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding!


Vert.x fall conferences

A lot of Vert.x conferences are planned around the world this fall; here is a quick recap of these events:

If you are going to one of these conferences, don’t miss the opportunity to learn more about Vert.x!

Vert.x Blueprint Tutorials

The Vert.x Blueprint project aims to provide guidelines to Vert.x users for implementing various applications, such as message-based applications and microservices. This post introduces the content of each blueprint.

This work has been done in the context of a Google Summer of Code project.

Overview

The blueprint project contains three parts: Todo Backend, Vert.x Kue, and Online Shopping Microservice. Each provides runnable code as well as very detailed documents and tutorials (in both English and Chinese).

Vert.x Blueprint - Todo Backend

Repository: sczyh30/vertx-blueprint-todo-backend.

This blueprint is a todo-backend implementation using Vert.x and various persistence backends (e.g. Redis or MySQL). It is intended as an introduction to basic Vert.x web RESTful service development. From this blueprint, developers learn:

  • What Vert.x is and its principles
  • What a Verticle is and how to use one
  • How to develop a REST API using Vert.x Web
  • How to make use of the asynchronous development model
  • Future-based asynchronous patterns
  • How to use persistence stores such as Redis and MySQL with the help of the Vert.x async data clients

The tutorials are:

Vert.x Blueprint - Vert.x Kue

Repository: sczyh30/vertx-blueprint-job-queue.

This blueprint is a priority job queue developed with Vert.x and backed by Redis. It is a Vert.x implementation of Automattic/kue that can be used in production.

The list of features provided by Vert.x Kue is available here: Vert.x Kue Features.

This blueprint is intended to be an introduction to message-based application development using Vert.x. From this blueprint, developers learn:

  • How to make use of Vert.x Event Bus (distributed)
  • How to develop message based applications with Vert.x
  • Event and message patterns with the event bus (Pub/sub, point to point)
  • How to design clustered Vert.x applications
  • How to design and implement a job queue
  • How to use Vert.x Service Proxy
  • More complex usage of Vert.x Redis

The tutorials are:

Vert.x Blueprint - Online Shopping Microservice

Repository: sczyh30/vertx-blueprint-microservice.

This blueprint is a micro-shop microservice application developed with Vert.x. It is intended as an illustration of how to develop microservice applications using Vert.x. From this blueprint, developers learn:

  • Microservice development with Vert.x
  • Asynchronous development model
  • Reactive patterns
  • Event sourcing patterns
  • Asynchronous RPC on the clustered event bus
  • Various types of services (e.g. HTTP endpoint, message source, event bus service)
  • Vert.x Service Discovery
  • Vert.x Circuit Breaker
  • Microservice with polyglot persistence
  • How to implement an API Gateway
  • Global authentication (OAuth 2 + Keycloak)

And many more things…

The tutorials are:

Enjoy the code carnival with Vert.x!

Centralized logging for Vert.x applications using the ELK stack


This post describes a solution for centralized logging of Vert.x applications using the ELK stack, a set of tools including Logstash, Elasticsearch, and Kibana that are well known to work together seamlessly.

Table of contents

Preamble

This post was written in the context of the project titled “DevOps tooling for Vert.x applications”, one of the Vert.x projects taking place during the 2016 edition of Google Summer of Code, a program that aims to bring students together with open source organizations, in order to help them gain exposure to software development practices and real-world challenges.

Introduction

Centralized logging is an important topic when building a microservices architecture, and a step forward in adopting the DevOps culture. Having an overall solution partitioned into a set of services distributed across the Internet can make it challenging to monitor the log output of each of them; hence, a tool that helps to accomplish this is very useful.

Overview

As shown in the diagram below, the general centralized logging solution comprises two main elements: the application server, which runs our Vert.x application; and a separate server, hosting the ELK stack. Both elements are linked by Filebeat, a highly configurable tool capable of shipping our application logs to the Logstash instance, i.e., our gateway to the ELK stack.

Overview of centralized logging with ELK

App logging configuration

The approach described here is based on a Filebeat + Logstash configuration: that means we first need to make sure our app logs to a file, whose records will be shipped to Logstash by Filebeat. Luckily, Vert.x provides the means to configure alternative logging frameworks (e.g., Log4j, Log4j2 and SLF4J) besides the default JUL logging. However, we can use Filebeat independently of the logging framework chosen.
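For reference, selecting such an alternative framework in a Vert.x 3.x application is typically done by setting the logger delegate factory system property when launching the JVM. A minimal sketch for Log4j2 (the jar name `app-fatjar.jar` is a placeholder):

```
java -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.Log4j2LogDelegateFactory \
     -jar app-fatjar.jar
```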

Log4j Logging

The demo that accompanies this post relies on Log4j2 as the logging framework. We instructed Vert.x to use this framework following the guidelines, and we made sure our logging calls are asynchronous, since we don’t want them to block our application. For this purpose, we opted for the AsyncAppender, which was included in the Log4j2 configuration together with the log output format, described in an XML configuration file available in the application’s resources folder.

<Configuration>
  <Appenders>
    <RollingFile name="vertx_logs" append="true" fileName="/var/log/vertx.log"
                 filePattern="/var/log/vertx/$${date:yyyy-MM}/vertx-%d{MM-dd-yyyy}-%i.log.gz">
      <PatternLayout pattern="%d{ISO8601} %-5p %c:%L - %m%n" />
    </RollingFile>
    <Async name="vertx_async">
      <AppenderRef ref="vertx_logs"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="DEBUG">
      <AppenderRef ref="vertx_async" />
    </Root>
  </Loggers>
</Configuration>

Filebeat configuration

Now that we have configured the log output of our Vert.x application to be stored in the file system, we delegate to Filebeat the task of forwarding the logs to the Logstash instance. Filebeat can be configured through a YAML file containing the log output location and the pattern to interpret multiline logs (i.e., stack traces). Also, the Logstash output plugin is configured with the host location, and a secure connection is enforced using the certificate from the machine hosting Logstash. We set the document_type to the type of instance that this log belongs to, which can later help us while indexing our logs inside Elasticsearch.

filebeat:
  prospectors:
    -
      document_type: trader_dashboard
      paths:
        - /var/log/vertx.log
      multiline:
        pattern: "^[0-9]+"
        negate: true
        match: after
output:
  logstash:
    enabled: true
    hosts:
      - elk:5044
    timeout: 15
    tls:
      insecure: false
      certificate_authorities:
        - /etc/pki/tls/certs/logstash-beats.crt

ELK configuration

To take full advantage of the ELK stack with respect to Vert.x and our app logs, we need to configure each of its individual components, namely Logstash, Elasticsearch and Kibana.

Logstash

Logstash is the component within the ELK stack that is in charge of aggregating the logs from each of the sources and forwarding them to the Elasticsearch instance.
Configuring Logstash is straightforward with the help of the specific input and output plugins for Beats and Elasticsearch, respectively. In the previous section we mentioned that Filebeat can be easily coupled with Logstash. Now we see that this can be done by just specifying beats as the input plugin and setting the parameters needed to be reached by our shippers (listening port, SSL key and certificate location).

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}

Now that we are ready to receive logs from the app, we can use Logstash’s filtering capabilities to specify the format of our logs and extract the fields so they can be indexed more efficiently by Elasticsearch.
The grok filtering plugin comes in handy in this situation. It lets us declare the log format using predefined and customized patterns based on regular expressions, and declare new fields from the information extracted from each log line. In the following block, we instruct Logstash to recognize our Log4j pattern inside the message field, which contains the log message shipped by Filebeat. After that, the date filtering plugin parses the timestamp extracted in the previous step and uses it to replace the one set by Filebeat when reading the log output file.

filter {
  grok {
    break_on_match => false
    match => [ "message", "%{LOG4J}" ]
  }
  date {
    match => [ "timestamp_string", "ISO8601" ]
    remove_field => [ "timestamp_string" ]
  }
}

The LOG4J pattern is not included in Logstash’s predefined patterns; however, we can build it from the data formats that ship with Logstash and adapt it to the specific log format used in our application, as shown next.

# Pattern to match our Log4j format
SPACING (?:[\s]+)
LOGGER (?:[a-zA-Z$_][a-zA-Z$_0-9]*\.)*[a-zA-Z$_][a-zA-Z$_0-9]*
LINE %{INT}?
LOG4J %{TIMESTAMP_ISO8601:timestamp_string} %{LOGLEVEL:log_level}%{SPACING}%{LOGGER:logger_name}:%{LINE:loc_line} - %{JAVALOGMESSAGE:log_message}
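To sanity-check this pattern, the same extraction can be sketched with a plain regular expression. The following simplified Java illustration (not part of the Logstash configuration; the sample log line is made up) mirrors the grok fields:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Log4jPatternDemo {
    public static void main(String[] args) {
        // Simplified equivalent of the grok LOG4J pattern above:
        // timestamp, level, logger name, line number, message
        Pattern p = Pattern.compile(
            "(?<timestamp>\\S+ \\S+) (?<level>[A-Z]+)\\s+(?<logger>[\\w.$]+):(?<line>\\d+) - (?<message>.*)");
        String log = "2016-08-01 12:00:00,123 INFO  io.vertx.demo.Main:42 - Server started";
        Matcher m = p.matcher(log);
        if (m.matches()) {
            System.out.println(m.group("level"));   // INFO
            System.out.println(m.group("logger"));  // io.vertx.demo.Main
            System.out.println(m.group("message")); // Server started
        }
    }
}
```

Each named group corresponds to one of the fields (log_level, logger_name, log_message, …) that Logstash adds to the event.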

Finally, we take a look at Logstash’s output configuration. It simply points to our Elasticsearch instance, instructs it to provide a list of all cluster nodes (sniffing), defines the name pattern for our indices, assigns the document type according to the metadata coming from Filebeat, and allows us to define a custom index template for our data.

output {
  elasticsearch {
    hosts => ["localhost"]
    sniffing => true
    manage_template => true
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    template => "/etc/filebeat/vertx_app_filebeat.json"
    template_overwrite => true
  }
}

Elasticsearch

Elasticsearch is the central component that enables the efficient indexing and real-time search capabilities of the stack. To take the most advantage of Elasticsearch, we can provide an indexing template of our incoming logs, which can help to optimize the data storage and match the queries issued by Kibana at a later point.
In the example below, we see an index template that would be applied to any index matching the pattern filebeat-*. Additionally, we declare our new log fields type, host, log_level, logger_name, and log_message. All are set as not_analyzed except the last two, which are analyzed, allowing queries based on regular expressions and not restricted to full-text queries.

{
  "mappings": {
    "_default_": {
      "_all": {
        "enabled": true,
        "norms": {
          "enabled": false
        }
      },
      "dynamic_templates": [
        {
          "template1": {
            "mapping": {
              "doc_values": true,
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "{dynamic_type}"
            },
            "match": "*"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "offset": {
          "type": "long",
          "doc_values": "true"
        },
        "type": { "type": "string", "index": "not_analyzed" },
        "host": { "type": "string", "index": "not_analyzed" },
        "log_level": { "type": "string", "index": "not_analyzed" },
        "logger_name": { "type": "string", "index": "analyzed" },
        "log_message": { "type": "string", "index": "analyzed" }
      }
    }
  },
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "filebeat-*"
}

Kibana

Although we could fetch all our logs from Elasticsearch through its API, Kibana is a powerful tool that allows friendlier querying and visualization. Besides the option to query our data through the indexed field names and search boxes accepting specific queries, Kibana allows creating our own Visualizations and Dashboards. Combined, they represent a powerful way to display data and gain insight in a customized manner. The accompanying demo ships with a couple of sample dashboards and visualizations that take advantage of the log fields we specified in our index template and provide valuable insight. This includes: visualizing the number of log messages received by ELK, observing the proportion of messages that each log source produces, and directly finding the sources of error logs.

Kibana Dashboard

Log shipping challenge

The solution presented here relies on Filebeat to ship log data to Logstash. However, if you are familiar with the Log4j framework you may be aware that there is a SocketAppender that allows writing log events directly to a remote server using a TCP connection. Although including the Filebeat + Logstash combination may sound like unnecessary overhead in the logging pipeline, it provides a number of benefits compared to the Log4j socket alternative:

  • The SocketAppender relies on the specific serialization of Log4j’s LogEvent objects, which is not an interchangeable format like JSON, the one used by the Beats solution. Although there are attempts to output logs in a JSON format for Logstash, they don’t support multiline logs, which results in messages being split into different events by Logstash. In addition, there is no official or stable input plugin for Log4j version 2.
  • While enabling Log4j’s async logging mode in an application delegates logging operations to separate threads, given that they coexist in the same JVM there is still a risk of data loss in case of a sudden JVM termination without proper log channel closing.
  • Filebeat is a data shipper designed to deal, in a reliable manner, with many of the constraints that arise in distributed environments; therefore it provides options to tailor and scale this operation to our needs: the possibility to load balance between multiple Logstash instances, to specify the number of simultaneous Filebeat workers that ship log files, and to specify a compression level in order to reduce the consumed bandwidth. Besides that, logs can be shipped in specific batch sizes, with a maximum number of retries, and with a connection timeout.
  • Lastly, although Filebeat can forward logs directly to Elasticsearch, using Logstash as an intermediary offers the possibility to collect logs from diverse sources (e.g., system metrics).

Demo

This post is accompanied by a demo based on the Vert.x Microservices workshop, where each microservice is shipped in a Docker container, simulating a distributed system composed of independent addressable nodes.
Also, the ELK stack is provisioned using a preconfigured Docker image by Sébastien Pujadas.

Following the guidelines in this post, the demo configures each of the microservices of the workshop and sets up a Filebeat process on each of them to ship the logs to a central container hosting the ELK stack.

Installation

In order to run this demo, it is necessary to have Docker installed, then proceed with:

  • Cloning or downloading the demo repository.
  • Separately, obtaining the source code of the branch of the Microservices workshop adapted for this demo.

Building the example

The Docker images belonging to the Vert.x Microservices workshop need to be built separately before this project can be launched.

Building the Vert.x Microservices workshop Docker images.

Build the root project and the Trader Dashboard followed by each of the modules contained in the solution folder. Issue the following commands for this:

mvn clean install
cd trader-dashboard
mvn package docker:build
cd ../solution/audit-service
mvn package docker:build
cd ../compulsive-traders
mvn package docker:build
cd ../portfolio-service
mvn package docker:build
cd ../quote-generator/
mvn package docker:build

Running the example

After building the previous images, build and run the example in vertx-elk using the following command:

docker-compose up

The demo

You can watch the demo in action in the following screencast:

Conclusion

The ELK stack is a powerful set of tools that ease the aggregation of logs coming from distributed services into a central server. Its main pillar, Elasticsearch, provides the indexing and search capabilities of our log data. Also, it is accompanied by the convenient input/output components: Logstash, which can be flexibly configured to accept different data sources; and Kibana, which can be customized to present the information in the most convenient way.

Logstash has been designed to work seamlessly with Filebeat, the log shipper, which represents a robust solution that can be adapted to our applications without having to make significant changes to our architecture. In addition, Logstash can accept varied types of sources, filter the data, and process it before delivering it to Elasticsearch. This flexibility comes at the price of having extra elements in our log aggregation pipeline, which can represent an increase in processing overhead or a point of failure. This additional overhead could be avoided if an application were capable of delivering its log output directly to Elasticsearch.

Happy logging!

Vert.x 3.3.3 is released !


We have just released Vert.x 3.3.3, a bug fix release of Vert.x 3.3.x.

Since the release of Vert.x 3.3.2, quite a few bugs have been reported. We would like to thank you all for reporting these issues.

Vert.x 3.3.3 release notes:

The event bus client using the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding !

Vert.x featuring Continuous Delivery with Jenkins and Ansible


This blog entry describes an approach to adopting Continuous Delivery for Vert.x applications using Jenkins and Ansible, taking advantage of the Jenkins Job DSL and Ansible plugins.

Table of contents

Preamble

This post was written in the context of the project titled “DevOps tooling for Vert.x applications”, one of the Vert.x projects taking place during the 2016 edition of Google Summer of Code, a program that aims to bring students together with open source organizations, in order to help them gain exposure to software development practices and real-world challenges.

Introduction

System configuration management tools (e.g., Ansible) have become very popular in recent years, and for a strong reason. Configuration management facilitates configuring a new environment following a fixed recipe, or varying it slightly with the help of parameters. This not only makes it possible to do it more frequently, but also reduces the chance of errors compared to doing it manually.
Beyond that, combining it with Continuous Integration tools (e.g., Jenkins) allows making a deployment as soon as a new codebase version is available, which represents the main building block of a Continuous Delivery pipeline, one of the objectives of embracing a DevOps culture.

Given that Vert.x is a framework that consists of a few libraries which can be shipped within a single fat jar, adopting a DevOps culture while developing a Vert.x-based application is straightforward.

Overview

As seen in the diagram below, this post describes a method to define a Jenkins build job which reacts to changes in a code repository. After successfully building the project, the job executes an Ansible playbook to deploy the new application version to the hosts specified in the Ansible configuration.

Overview of the continous delivery process

Creating a Jenkins build job using Job DSL

Jenkins provides a convenient way to define build jobs using a DSL. While this option avoids the hassle of configuring build jobs manually, it supports all the features of the regular interface through its API. It is possible to use Ansible together with Jenkins with the help of the Ansible plugin, whose instructions are also included in the Job DSL API. As an alternative to the Job DSL plugin, Ansible can be used inside the definition of a Jenkins Pipeline, one of the tool’s most recent features.

Below is a sample job definition which can be used after creating a freestyle job (seed job) and adding a new build step with the DSL script. In the script, there are a few things to notice:

  • A name for the job created by the seed job is given.
  • Specific versions of JDK, Maven, and Ansible (available in the environment) are used.
  • Git is selected as the SCM platform and the target repository is defined. Also, the build job is triggered according to a specific interval.
  • The Maven package goal is invoked, which is instructed to package the application into a fat jar.
  • Lastly, Ansible is used to call a playbook available in the filesystem. The app will be deployed to the defined target hosts, and the credentials (configured in Jenkins) will be used to log into them. Additionally, enabling the colorizedOutput option results in friendlier formatting of the results in the console output. The contents of this playbook will be addressed in the next section.
job('vertx-microservices-workshop-job') {
    jdk('JDK8')
    scm {
        git('git://github.com/ricardohmon/vertx-microservices-workshop.git')
    }
    triggers {
        scm('*/15 * * * *')
    }
    steps {

      def mvnInst = 'M3.3.9'  
      maven {  
        goals('package')  
        mavenInstallation(mvnInst)  
      }  
      ansiblePlaybook('/ansible/playbook.yml') {  
        inventoryPath('/ansible/hosts')  
        ansibleName('Ansible2.0')  
        credentialsId('vagrant-key')  
        colorizedOutput(true)  
      }  

    }  
}

Deploying Vert.x app using Ansible

An Ansible playbook is quite convenient for deploying a Vert.x application to a number of hosts while still taking each host’s particularities into account. Below is a sample playbook that deploys the respective application to each of the hosts described in an inventory file. The playbook comprises the following tasks and considerations:

1) A task that targets only hosts with a database.

  • The target hosts are specified with the name of the host (or host group) defined in the inventory file.

2) The actual application deployment task. Here, several considerations apply:

  • The application may require that only one host is updated at a time.
    This can be achieved with the serial option, while the order of deployment to hosts can be enforced in the hosts option.
    Host processing order
    Even though we could have declared all hosts, Ansible does not provide an explicit way to specify the order.
  • Java is a system requirement for our Vert.x applications.
    Besides installing it (keep reading), we need to declare the JAVA_HOME environment variable.
  • A deployment may just be an update to an already running application (Continuous Deployment), hence it is convenient to stop the previous application inside the pre_tasks and take post-deployment actions in the post_tasks. Vert.x ships with convenient start/stop/list commands that are very helpful here. We can use the list command and extract (using a regex) the id of the running application from its output to stop it before deploying a new version.
    Hint
    If our solution includes a load balancer or proxy, we could deal with them at this step as described in Ansible’s best practices for rolling updates
  • A call to a role that performs the actual application deployment. The Jenkins Ansible plugin exposes, among others, a WORKSPACE environment variable, which can be very helpful in the following tasks, as shown later.
# 1) Special task for the service with a db
- hosts: audit-service
  remote_user: vagrant
  become: yes
  roles:
    - db-setup

  # 2) Common tasks for all hosts
- hosts: quote-generator:portfolio-service:compulsive-traders:audit-service:trader-dashboard
  remote_user: vagrant
  become: yes
  serial: 1
  environment:
    JAVA_HOME: /usr/lib/jvm/jre-1.8.0-openjdk/

  pre_tasks:
  - name: Check if the app jar exists in the target already
    stat: path=/usr/share/vertx_app/app-fatjar.jar
    register: st
  - name: List running Vert.x applications
    command: java -jar /usr/share/vertx_app/app-fatjar.jar list
    register: running_app_list
    when: st.stat.exists == True
  - name: Stop app if it is already running (avoid multiple running instances)
    command: java -jar /usr/share/vertx_app/app-fatjar.jar stop {{ item | regex_replace('^(?P<id>.{8}-.{4}-.{4}-.{4}-.{12})\t.*', '\\g<id>') }}
    with_items: "{{ running_app_list.stdout_lines | default([]) }}"
    when: st.stat.exists == True and (item | regex_replace('.*\t(.*)$', '\\1') | match('.*/app-fatjar.jar$'))

  # Main role
  roles:
    - { role: vertx-app-deployment, jenkins_job_workspace:"" }

  post_tasks:
  - name: List again running Vert.x applications
    command: java -jar /usr/share/vertx_app/app-fatjar.jar list
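As an aside, the id extraction done with regex_replace in the playbook above can be sanity-checked with a plain Java regex. The sample line below is hypothetical; it only assumes the `list` command prints the application id (a UUID) followed by a tab and the jar or main class:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VertxListParser {
    public static void main(String[] args) {
        // Hypothetical line in the shape "<uuid>\t<jar or main class>"
        String line = "6a7e3c1d-0b2f-4a9e-8c11-2f3d4e5a6b7c\t/usr/share/vertx_app/app-fatjar.jar";
        // Same idea as the playbook's regex_replace: capture the UUID before the tab
        Pattern id = Pattern.compile("^(.{8}-.{4}-.{4}-.{4}-.{12})\t.*");
        Matcher m = id.matcher(line);
        if (m.matches()) {
            System.out.println(m.group(1)); // 6a7e3c1d-0b2f-4a9e-8c11-2f3d4e5a6b7c
        }
    }
}
```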

Once we have taken care of the actions shown before, the remaining tasks (included in the main deployment role) reduce to the following:

1) Prepare the target machine with the proper environment to run our application. This includes:

  • Set up Java (pretty convenient to do it through a package manager).
  • Copy the Vert.x application package to the appropriate folder (quite simple using a fat jar). The actual name and location of the jar package in the Jenkins environment can be defined using host-specific variables.
  • In case necessary, copy the required config files.
- name: Install Java 1.8 and some basic dependencies
  yum: name={{ item }} state=present
  with_items:
   - java-1.8.0-openjdk
- name: Ensure app dir exists
  file: path=/usr/share/vertx_app/ recurse=yes state=directory mode=0744
- name: Copy the Vert.x application jar package
  copy: src={{ app_jar }} dest=/usr/share/vertx_app/app-fatjar.jar mode=0755
- name: Ensure config dir exists
  file: path=/etc/vertx_app/ recurse=yes state=directory mode=0744
- name: Copy the application config file if needed
  copy: src={{ app_config }} dest=/etc/vertx_app/config.json mode=0755
  when: app_config is defined

2) Run the application as a service in the hosting machine.

  • Make sure to ignore the hang-up signal with the help of the nohup command. Otherwise, Ansible will get stuck at this step.
- name: Run Vert.x application as a service, ignore the SIGHUP signal
  shell: nohup java {{ vertx_opts }} -jar /usr/share/vertx_app/app-fatjar.jar start {{ launch_params }}
  register: svc_run_out
- name: Print run output
  debug: var=svc_run_out.stdout_lines

Launching the Vert.x app
This example uses the start command to launch the application as a service. This method may be more convenient than creating an init.d script or calling Vert.x from the command line, both of which would have required installing the Vert.x libraries in a separate Ansible task.

This covers all the configuration needed to build from a repository using Jenkins and deploy the results to our hosts with Ansible.

Sample sources and demo

The sample configurations presented above are part of a complete demo based on the Vert.x microservices workshop that exemplifies a basic Continuous Delivery scenario. This setup is available in a repository and contains, in addition, a pre-configured Jenkins-based demo ready to host the build job described in the previous sections. The demo scenario requires Vagrant and VirtualBox to be launched.

Launch instructions

  • Clone or download this repository, and launch the demo using vagrant up

    git clone https://github.com/ricardohmon/vertx-ansible.git
    cd demo
    vagrant up
    This command will launch a virtual machine hosting Jenkins with the required plugins installed (the expected tool names are listed below), and also launch five additional VMs that will host the microservices deployed by Jenkins.
  • Create a Jenkins freestyle build job using the DSL job script (seed job) found in deployment-jobs/microservices_workshop_dsl.groovy and build it.

    Tool configuration assumption
    The DSL Job assumes the following tools (with names) have been configured in Jenkins: Java 8(JDK8), Maven (M3.3.9), Ansible (Ansible2.0)
  • After building the seed job, a new job (vertx-microservices-workshop-job) will be created, which will be in charge of pulling recent changes of the project, building it, and deploying it.

Demo

Watch the previous demo in action in the following screencast:

Conclusion

A Continuous Delivery approach is a must in modern software development lifecycles (including for Vert.x-based applications) and a step further towards adopting a DevOps culture. There are a number of tools that enable it; one example is the combination of Jenkins + Ansible described in this post.
While Jenkins offers the possibility to integrate recent changes of a codebase and build runnable artifacts, Ansible can help to deploy them to hosting environments. The two tools can be coupled easily with the help of the Job DSL plugin, a feature of Jenkins that allows describing a build job using a domain-specific language, and which can help to integrate additional steps and tools into a CD pipeline.

Further enhancements can be made to this basic pipeline, such as integrating the recent Pipeline plugin, a feature that allows better orchestration of CD stages; including notification and alerting services; and, ultimately, a zero-downtime deployment approach, which could be achieved with the help of a proxy - plus tons of options available through Jenkins plugins.

Thanks for reading!

OAuth2 got easy


OAuth2 support has existed in Eclipse Vert.x since version 3.2.0. The implementation follows the principles that rule the whole Vert.x ecosystem: unopinionated, it does what you want it to do, simple but not too simple.

This works fine because OAuth2 is a widely spread standard and vendors adhere to it quite well. However, due to the API and the details of the specification, using it requires some knowledge of the kind of flow your application needs to support and of the endpoints for authorizing and getting tokens. This information is easily accessible to anyone who has the time and will to read the vendor documentation, but it means that developers would need to spend time on a task not related to their project.

Vert.x strives to be fast and productive, so what if we could help you focus on your development tasks rather than on reading OAuth2 provider documentation? This is what you can expect from the next release.

Out of the box you will find that you can instantiate an OAuth2 provider as easily as:

Provider.create(vertx, clientId, clientSecret)

That’s it! Simple and to the point. Sure, it makes some assumptions: it assumes that you want to use the “AUTH_CODE” flow, which is what you normally do for web applications with a backend.

The supported Provider implementations will configure the base API (which will still be available) with the correct URLs, scope encoding scheme or extra configuration such as “shopId”/“GUID” for Shopify/Azure AD.

So which supported Providers can you already find?

That’s a handful of Providers, but there is more. Say that you want to ensure that your SSL connections are valid and want to control the certificate validation. Every provider also accepts a HttpClientOptions object that will be used internally when contacting your provider. In this case, you have full security control of your connection, not just the defaults.

You can expect this new code to land in 3.4, as it is not available in the current release (3.3.3).

Getting started with new fabric8 Vert.x Maven Plugin


The all-new fabric8 Vert.x Maven Plugin allows you to set up, package, run, start, stop and redeploy your application easily, with very little configuration resulting in a less verbose pom.xml.

The plugin is developed under the fabric8 umbrella.

Traditionally Vert.x applications using Apache Maven need to have one or more of the following plugins:

  • Maven Shade Plugin - aids in packaging an uber jar of the Vert.x application, with additional configuration to perform SPI combining, MANIFEST.MF entries, etc.
  • Maven Exec Plugin - aids in starting the Vert.x application
  • Maven Ant Plugin - aids in stopping the running Vert.x application

Though these are great plugins that do what is required, at the end of the day the developer is left with a verbose pom.xml which might become harder to maintain as the application or its configuration grows. And even if we decide to go this way and use those plugins, there are some things which can’t be done, or can’t be done easily:

  • run an application in the foreground - a typical need during development, where the application starts in the foreground of the Apache Maven build and is killed automatically once we hit Ctrl + C (or Cmd + C on Mac)
  • redeploy - one of the coolest features of Vert.x, allowing us to perform hot deployments. We can manage to do this with IDE support, but not natively using Apache Maven - typically in cases where we disable Automatic Builds in the IDE
  • set up Vert.x applications with sensible defaults and the required Vert.x dependencies, e.g. vertx-core

In this first blog of the fabric8 Vert.x Maven Plugin series we will help you get started with this new plugin, highlighting how it alleviates the aforementioned pain points with a less verbose pom.xml.

The Apache Maven plugin source code is available on GitHub, with the plugin documentation available at fabric8 Vert.x Maven Plugin.

The source code of the examples used in this blog is available on GitHub.

Let’s set it up

It’s very easy to set up and get started. Let’s say you have a project called vmp-blog with the following content as part of your pom.xml:

From the project directory, just run the following command:

mvn io.fabric8:vertx-maven-plugin:1.0.0:setup

On successful execution of the above command the project’s pom.xml will be updated:

The command did the following for you on the project:

  • added a couple of properties
    • fabric8.vertx.plugin.version - the latest fabric8 vert.x maven plugin version
    • vertx.version - the latest Vert.x framework version
  • added the Vert.x dependency BOM and vertx-core dependency corresponding to vertx.version
  • added vertx-maven-plugin with a single execution for goals initialize and package

The source code created by this step is available here

Et voilà, you are now all set to go with your Vert.x application building with Apache Maven!!

Let’s package it

Now that we have set up our project to use the vertx-maven-plugin, let’s add a simple verticle and package the Vert.x application as a typical uber jar (in the Vert.x world we call them fat jars). The source code of this section is available here.

To make package work correctly we need to add a property called vertx.verticle, which will be used by the vertx-maven-plugin to set the Main-Verticle: attribute of the MANIFEST.MF. Please refer to the documentation of package for other possible configurations. There is also an examples section of the vertx-maven-plugin which provides various sample snippets.

The updated pom.xml with the added property and the vertx-maven-plugin is shown below:

Only the updated section is shown below; the rest of the pom.xml is the same as above.
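Since only a description of the update is given here, the following is a minimal sketch of what the relevant pom.xml section might look like. The exact layout generated by the setup goal may differ; the verticle class name is taken from the MANIFEST.MF shown further below, and the plugin coordinates from the setup command above.

```xml
<properties>
  <!-- tells the plugin which verticle goes into Main-Verticle: -->
  <vertx.verticle>io.fabric8.blog.MainVerticle</vertx.verticle>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>io.fabric8</groupId>
      <artifactId>vertx-maven-plugin</artifactId>
      <version>${fabric8.vertx.plugin.version}</version>
      <executions>
        <execution>
          <id>vmp</id>
          <goals>
            <goal>initialize</goal>
            <goal>package</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```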

To package the Vert.x application, run the following Apache Maven command from the project directory:

mvn clean package

On a successful run of the above command you should see a file named ${project.finalName}.jar created in ${project.build.directory}; you can now do the following to start and run the Vert.x application:

java -jar ${project.build.directory}/${project.finalName}.jar

The generated MANIFEST.MF file is as shown below:

Main-Class: io.vertx.core.Launcher
Main-Verticle: io.fabric8.blog.MainVerticle
Manifest-Version: 1.0

The source code up to now is available here.

SPI Combination

The package goal by default does SPI combination. Let’s say you have a service file called com.fasterxml.jackson.core.JsonFactory in ${project.basedir}/src/main/resources/META-INF/services with the contents:

foo.bar.baz.MyImpl
${combine}

During packaging, if the fabric8 Vert.x Maven Plugin finds another com.fasterxml.jackson.core.JsonFactory service definition file within the project dependencies with content foo.bar.baz2.MyImpl2, then it merges the content into com.fasterxml.jackson.core.JsonFactory of ${project.basedir}/src/main/resources/META-INF/services, resulting in the following content:

foo.bar.baz.MyImpl
foo.bar.baz2.MyImpl2

The position of ${combine} controls the ordering of the merge; since we added ${combine} below foo.bar.baz.MyImpl, all other SPI definitions will be appended below foo.bar.baz.MyImpl.
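The placeholder rule can be pictured with a tiny JDK-only sketch. This is a hypothetical illustration of the merge behaviour described above, not the plugin’s actual implementation: dependency entries are substituted at the placeholder position, so the local ordering is preserved.

```java
public class SpiMerge {

    // Hypothetical sketch of the ${combine} rule: entries found in
    // dependency service files replace the placeholder in the local file.
    static String merge(String localServiceFile, String dependencyEntries) {
        return localServiceFile.replace("${combine}", dependencyEntries).trim();
    }

    public static void main(String[] args) {
        String local = "foo.bar.baz.MyImpl\n${combine}";
        String fromDependencies = "foo.bar.baz2.MyImpl2";
        System.out.println(merge(local, fromDependencies));
    }
}
```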

What’s next ?

It’s good to have the jar packaged and run via java -jar uber-jar, but during typical development you don’t want to run frequent Apache Maven packaging; you want to see your changes automatically redeployed.

Don’t worry! As part of the fabric8 Vert.x Maven Plugin we have added an incremental builder to the Apache Maven build, which watches your sources and resources, performs automatic re-builds and delegates the redeployment to Vert.x.

Run, redeploy and other features of the fabric8 Vert.x Maven Plugin will be explored in detail in the next part of this series. Until then, have fun with the fabric8 Vert.x Maven Plugin!


Internet of Things - Reactive and Asynchronous with Vert.x


Vert.x IoT

This is a re-publication of the following blog post.

I have to admit … before joining Red Hat I didn’t know about the Eclipse Vert.x project, but it took me only a few days to fall in love with it!

For the other developers who don’t know what Vert.x is, the best definition is …

… a toolkit to build distributed and reactive systems on top of the JVM using an asynchronous non blocking development model

The first big thing is that Vert.x lets you develop a reactive system, which means:

  • Responsive : the system responds in an acceptable time;
  • Elastic : the system can scale up and scale down;
  • Resilient : the system is designed to handle failures gracefully;
  • Asynchronous : the interaction with the system is achieved using asynchronous messages;

The other big thing is the asynchronous non-blocking development model. This doesn’t mean multi-threading: thanks to non-blocking I/O (i.e. for handling network, file system, …) and a callback system, it’s possible to handle a huge number of events per second using a single thread (aka the “event loop”).
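The single-thread model can be illustrated with a JDK-only toy (this is not Vert.x itself, just a plain single-thread executor standing in for an event loop): one thread drains a queue of many small, non-blocking handlers.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration: a single thread processes a large number of queued
// "events", which works only because each handler never blocks.
public class SingleThreadEvents {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        AtomicInteger handled = new AtomicInteger();

        // Submit 10,000 tiny non-blocking handlers to the one-thread loop.
        for (int i = 0; i < 10_000; i++) {
            eventLoop.submit(handled::incrementAndGet);
        }

        eventLoop.shutdown();
        eventLoop.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(handled.get());
    }
}
```

If any handler blocked (on a socket read, say), every queued event behind it would stall, which is why the non-blocking discipline matters.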

You can find a lot of material on the official web site in order to better understand what Vert.x is and all its main features; it’s not my objective to explain it all in this very short article, which is mostly … you guessed it … messaging and IoT oriented :-)

In my opinion, all the above features make Vert.x a great toolkit for building Internet of Things applications where being reactive and asynchronous is a “must” in order to handle millions of connections from devices and all the messages ingested from them.

Vert.x and the Internet of Things

As a toolkit, Vert.x is made of different components; which of them are useful for IoT?

Starting with the Vert.x Core component, there is support for both versions of the HTTP protocol (1.1 and 2.0), so you can develop an HTTP server which exposes a RESTful API to devices. Today, a lot of web and mobile developers prefer to use this protocol for building their IoT solutions, leveraging their deep knowledge of HTTP.

Regarding more IoT oriented protocols, there is the Vert.x MQTT server component. It doesn’t provide a full broker, but exposes an API that a developer can use to handle incoming connections and messages from remote MQTT clients and then build business logic on top of it, for example developing a real broker or performing protocol translation (i.e. to/from plain TCP, to/from the Vert.x Event Bus, to/from HTTP, to/from AMQP and so on). The API raises all events related to the connection request from a remote MQTT client and all subsequent incoming messages; at the same time, the API provides the means to reply to the remote endpoint. The developer doesn’t need to know how MQTT works on the wire in terms of encoding/decoding messages.

Related to the AMQP 1.0 protocol, there are the Vert.x Proton and AMQP bridge components. The first provides a thin wrapper around the Apache Qpid Proton engine and can be used for interacting with AMQP based messaging systems as a client (sender and receiver), or even for developing a server. The second provides a bridge between the protocol and the Vert.x Event Bus, mostly used for communication between deployed Vert.x verticles. Thanks to this bridge, verticles can interact with AMQP components in a simple way.

Last but not least, there is the Vert.x Kafka client component, which provides access to Apache Kafka for sending and consuming messages from topics and related partitions. A lot of IoT scenarios leverage Apache Kafka in order to have an ingestion system capable of handling millions of messages per second.

Conclusion

The current Vert.x code base provides quite interesting components for developing IoT solutions, which are already available in the current 3.3.3 version (see Vert.x Proton and the AMQP bridge) or will be available soon in the future 3.3.4 version (see the MQTT server and Kafka client). Of course, you don’t need to wait for their official release: even though still under development, you can already adopt these components and provide your feedback to the community.

This ecosystem will grow in the future, and Vert.x will be a leading actor in the world of IoT applications based on a microservices architecture!

Building services and APIs with AMQP 1.0


Microservices and APIs are everywhere. Everyone talks about them, presentation slides are full of them … some people are actually even building them. Microservices and APIs are of course not completely new concepts, and they are a bit over-hyped. But in general the ideas behind them are not bad. Unfortunately, many people seem to believe that the only way to implement an API in a microservice is to use HTTP and REST. That is of course not true. Microservices and APIs can be based on many different protocols and technologies. My favorite one is of course AMQP. Don’t get me wrong, HTTP and REST are not necessarily bad. But in some cases AMQP is simply better, and creating AMQP based APIs does not need to be complicated.

This is a re-publication of the following blog post.

LiveScore service

For demonstration, I will use a very simple service for keeping scores of football games. It has a very basic API with only three calls:

  • Add a new game
  • Update a score of existing game
  • List the scores

The AMQP variants will additionally be able to push live updates to the clients.

The demo uses Java and the Vert.x toolkit. Vert.x is cool and I definitely recommend it to everyone. But most of the stuff from the demo should also be possible in other programming languages and/or frameworks.

HTTP API

The HTTP implementation of my service is a typical REST API. Since it is very simple, it accepts requests on only one endpoint – /api/v1.0/scores. New games are added with POST operations, scores are updated with PUT operations and the list of all scores can be obtained with GET.

With Vert.x, creating HTTP/REST API is very easy. First the web router has to be created with all planned API calls:

router = Router.router(vertx);  
router.route("/api/v1.0/*").handler(BodyHandler.create());  
router.get("/api/v1.0/scores").handler(this::getScores);  
router.post("/api/v1.0/scores").handler(this::addGame);  
router.put("/api/v1.0/scores").handler(this::setScore);

Then the HTTP server has to be created and linked with the router:

HttpServerOptions httpOptions = new HttpServerOptions();  
server = vertx.createHttpServer(httpOptions)  
   .requestHandler(router::accept)  
   .listen(httpPort);

And finally the handlers which will be triggered for each API call have to be implemented as well. The full code is on GitHub.

HTTP based API

The HTTP API doesn’t provide any way to automatically push the score updates to the clients. The clients simply have to poll the service periodically to get the updates. HTTP does of course have some ways to push live updates to clients, for example with WebSockets or with chunked transfers. However, these are not that easy to implement. The service would also need to keep a separate connection with every client and push the updates to each of them separately.

AMQP API

Creating the HTTP API was really easy. Creating an AMQP API has to be more complicated, right? We would need an AMQP server which listens on some port and accepts connections, sessions, links and so on. There are usually no nice, simple-to-use libraries for this.

Sure, this is one way to do it. There is actually a nice library called Apache Qpid Proton. It has Java and C versions and bindings into many other languages (Go, C++, Python, …). It makes creating your own AMQP server a lot easier: it will take care of decoding and encoding the AMQP protocol, handling the connections, sessions etc. But still, Qpid Proton is not even nearly as easy to use as the HTTP router used for the HTTP API.

API with AMQP server

Are there any easier options? What if all that is needed to create an AMQP based API is a simple AMQP client? Normally that should not be possible, because we need the API to listen on some port for the clients to connect to and send requests, and clients usually don’t listen on any ports. However, Apache Qpid has something called Dispatch. It works as a lightweight AMQP router. Dispatch will serve as the AMQP server which was missing. It will take care of handling client connections and security, and shield the service from the actual clients. All the service needs to do is use an AMQP client to connect to Dispatch on a predefined address and wait for requests.

AMQP API with Dispatch router

Dispatch needs to be configured with three API entry points as addresses:

address{  
    prefix: /setScore  
    distribution: balanced  
}  
address{  
    prefix: /getScores  
    distribution: balanced  
}  
address{  
    prefix: /addGame  
    distribution: balanced  
}

The LiveScore service will connect to these addresses as a receiver / consumer. Clients will connect to them as senders / producers. And Dispatch will take care of routing the messages between the clients and the service. Clients can also create additional receivers so that the service is able to respond to their requests, and specify the address of the receiver as the reply-to header in the request message. The LiveScore service will automatically send the response to this address. But specifying a reply-to is not mandatory; if the client wants, it can simply fire the request and forget about the response.

LiveScore service is using Vert.x AMQP Bridge which allows easy integration between the Vert.x Event Bus and the AMQP connection to my router. The service starts the AMQP Bridge and if it successfully connects to Dispatch it creates three receivers for the API calls.

AmqpBridgeOptions options = new AmqpBridgeOptions().addEnabledSaslMechanism("ANONYMOUS");  
bridge = AmqpBridge.create(vertx, options);  
bridge.start(amqpHostname, amqpPort, res -> {  
   if (res.succeeded())  
   {  
     bridge.createConsumer("/setScore").setMaxBufferedMessages(100).handler(this::setScore);  
     bridge.createConsumer("/getScores").setMaxBufferedMessages(100).handler(this::getScores);  
     bridge.createConsumer("/addGame").setMaxBufferedMessages(100).handler(this::addGame);  
     fut.complete();  
   }  
   else  
   {  
     fut.fail(res.cause());  
   }  
});

The only other thing which needs to be done is creating handlers for handling the requests received from clients:

public void getScores(Message msg) {  
   if (msg.replyAddress() != null)  
   {  
     JsonObject response = new JsonObject();  
     response.put("application_properties", new JsonObject().put("status", 200));  
     response.put("body", new JsonArray(Json.encode(scoreService.getScores())).encode());  
     msg.reply(response);  
   }  
   else  
   {  
     LOG.warn("Received LiveScore/getScores request without reply to address");  
   }  
}

Live broadcasting of score updates is also very easy. A new address has to be added to the Dispatch configuration. This address will be used in the opposite direction: the service connects to it as sender / producer, and clients which want to receive live updates create a receiver against this address. Importantly, this address has to be marked as multicast. Thanks to that, every single message will be delivered to all connected clients and not just to one of them:

address{  
    prefix: /liveScores  
    distribution: multicast  
}

Multicasting messages

Thanks to the multicast distribution, the service doesn’t need to send a separate update to every single client. It sends the message only once and Dispatch takes care of the rest.

public void broadcastUpdates(Game game) {  
   LOG.info("Broadcasting game update " + game);  
   JsonObject message = new JsonObject();  
   message.put("body", new JsonObject(Json.encode(game)).encode());  
   producer.send(message);  
}

Again, the complete source code of the demo service is available on GitHub.

How to structure AMQP APIs?

Compared to HTTP and REST, AMQP gives its users a lot more freedom when designing the API. It isn’t tied to the available HTTP methods.

My LiveScore service is using the API endpoints named according to their function:

  • /LiveScore/addGame
  • /LiveScore/setScore
  • /LiveScore/getScores

It also uses HTTP status codes in the application properties of the messages to describe the result of the request, and JSON as the message payload with the actual request and response content.

Is that the best way? To be honest, I don’t know. Just for the request encoding there are many different options. AMQP has its own encodings, which support all possible basic as well as more advanced data types and structures. But AMQP can also transfer any opaque data - be it JSON, XML, Google Protocol Buffers or anything else. For simple requests, the payload can be skipped completely and application properties can be used instead. And for everyone who really loves HTTP/REST, one can also model the API in REST style, as I did in an alternative implementation of my demo service.

Browser

One of the environments where HTTP is, so to say, “at home” is the browser. AMQP will probably never be as “native” a protocol for any browser as HTTP is. However, AMQP can be used even from browsers: it has a WebSocket binding and there are JavaScript AMQP libraries - for example rhea. So AMQP really can be used everywhere.

Decoupling

It is important to mention that the Dispatch router doesn’t decouple the client from the service. If decoupling is needed, it can be easily achieved by replacing the Dispatch router with an AMQP broker. The broker would decouple the client from the service without any changes in the service or clients.

Conclusion

While creating APIs with AMQP can be very easy, it doesn’t mean that AMQP is the best protocol for all APIs. There are definitely APIs where HTTP is more suitable. But in some use cases AMQP has clear advantages - in my LiveScore example, especially the one-to-many communication. It is important to keep an open mind and select the best available option for a given service.

An Introduction to the Vert.x Context Object


Under the hood, the vert.x Context class plays a critical part in maintaining the thread-safety guarantees of verticles. Most of the time, vert.x coders don’t need to make use of Context objects directly. However, sometimes you may need to. This article provides a brief introduction to the vert.x Context class, which covers why it’s important, and why and when you might wish to make use of the Context directly, based on the author’s experience of building a generic async library which can be used with vert.x.

This is a re-publication of the following blog post.

The Context object in Vert.x - a brief introduction

Introduction

Recently I’ve been looking at the possibility of building an asynchronous version of the pac4j library, with a view to then migrating the vertx-pac4j implementation to use the asynchronous version of pac4j by default.

Since I’m keen (for obvious reasons) that the async version of pac4j not be tightly coupled to one particular asynchronous/non-blocking framework, I decided to expose the API via the CompletableFuture class, using it to wrap values which will be determined in the future. I opted to use the vert.x framework for my asynchronous testing as a way of exercising the API as it emerged. This in turn led me to learn some aspects of the vert.x Context class which I didn’t really understand before.

The information presented relates to Vert.x version 3.3.3. It is conceivable that later versions of vert.x could render aspects of this article incorrect.

Introduction to the Context class

Whenever a vert.x Handler is executed, or the start or stop method of a verticle is called, that execution is associated with a specific context. Generally a context is an event-loop context and is therefore associated with an event loop thread (exceptions are covered in the Further Reading referenced below). Contexts are propagated: when a handler is set by code running on a specific context, that handler will also be executed on the same context. This means, for example, that if the start method of a verticle instance sets a number of event bus handlers (as many do), then they will all run on the same context as the start method for that verticle (so all handlers for that verticle instance will share a common context).

A schematic of the relationships between non-worker verticles, contexts and eventloop threads is shown in Figure 1.

Vertx Context/Thread/Verticle Relationships

Note that each verticle effectively has only one context for handlers created by its start method, and each context is bound to a single event-loop thread. A given event-loop thread can, however, have multiple contexts bound to it.

When are contexts not propagated?

When a verticle’s start method is called, a new context is created. If 4 identical verticles are deployed via the instances parameter on DeploymentOptions, the start method of each will run on a new context. This is logical, as we may not want all non-worker verticles to be bound to a single eventloop thread when multiple eventloop threads are available.

Threading Guarantees

There are certain consequences of the propagation of contexts to handlers as mentioned above. The most important one is that since all handlers in a given eventloop verticle run on the same context (the one on which its start method ran), they all run on the same eventloop thread. This gives rise to the threading guarantee within vert.x, that as long as a given verticle is the only one to ever access a piece of state, then that state is being accessed by only one thread, so no synchronization will be necessary.

Exception Handling

Each context can have its own exception handler attached for handling exceptions which occur during event loop processing.

Why might you not want the default exception handler?

As one example, you might have some verticles running whose job it is to monitor other verticles and, if something appears to go wrong with them, undeploy and restart them - a frequent pattern in an actor- or microservices-style architecture. So one option could be that when a supervised verticle encounters an unrecoverable error, it could simply notify its supervisor that it has gone wrong via an eventbus message, and its supervisor could then undeploy and redeploy (and after a number of failures in rapid succession possibly give up hope or escalate to its own supervisor).

Going off-context and getting back onto a particular context

There are several reasons why you might execute code off-context and then want to operate back on a vert.x context when complete. I’ll outline a couple of scenarios below.

Running code on a separate thread

Firstly you might be using an asynchronous driver which is entirely vertx-unaware. Its code will run on non-eventloop threads but it’s possible you may then want to use the results of that code to update information within your verticle. If you don’t get back onto the correct context, you can’t make any guarantees about thread-safety, so your subsequent processing needs to be run back on the correct eventloop thread.

Using asynchronous Java 8 APIs

APIs such as CompletableFuture are context-unaware. In one example, I created an already-completed future on the vert.x event loop in a test. I then attached subsequent processing to it via thenRun:

@RunWith(VertxUnitRunner.class)
public class ImmediateCompletionTest {

    @Rule
    public final RunTestOnContext rule = new RunTestOnContext();

    @Test
    public void testImmediateCompletion(TestContext context) {

        final Async async = context.async();
        final Vertx vertx = rule.vertx();
        final CompletableFuture<Integer> toComplete = new CompletableFuture<>();
        final String threadName = Thread.currentThread().getName();
        toComplete.complete(100);
        toComplete.thenRun(() -> {
            assertThat(Thread.currentThread().getName(), is(threadName));
            async.complete();
        });
    }
}

Naively one might expect this to automatically run on the context, since it hasn’t left the eventloop thread on which the future was completed, and indeed it’s provable that it is on the correct thread. However, it will not be on the correct context. This would mean that it wouldn’t, for example, invoke any modified exception handler attached to the context.
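That thread behaviour can be demonstrated with plain JDK code, no Vert.x required. This is a minimal sketch of the CompletableFuture semantics at play (class and variable names are my own): a callback attached to an already-completed future runs synchronously on the attaching thread.

```java
import java.util.concurrent.CompletableFuture;

// JDK-only sketch: thenRun on an already-completed future executes
// synchronously on the thread that attaches the callback.
public class ThenRunThread {
    public static void main(String[] args) {
        CompletableFuture<Integer> toComplete = CompletableFuture.completedFuture(100);
        String caller = Thread.currentThread().getName();
        StringBuilder callbackThread = new StringBuilder();

        toComplete.thenRun(() -> callbackThread.append(Thread.currentThread().getName()));

        // Same thread - but in Vert.x terms, not necessarily the same context.
        System.out.println(caller.equals(callbackThread.toString()));
    }
}
```

Staying on the same thread is exactly what makes the situation deceptive: the thread is right, but the context association is lost.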

Getting back on context

Fortunately, once we’ve left the context, it’s quite straightforward to return to it. Prior to defining the code block within thenRun, we can use Vertx.currentContext() or vertx.getOrCreateContext() to get a handle on the context on which our eventloop code is running. We can then execute the code block inside a call to Context::runOnContext, similar to:

final Context currentContext = vertx.getOrCreateContext();
toComplete.thenRun(() -> {
    currentContext.runOnContext(v -> {
        assertThat(Thread.currentThread().getName(), is(threadName));
        async.complete();
    });
});

While getting back onto the correct context may not be critical if you have remained on the event loop thread throughout, it is critical if you are going to invoke subsequent vert.x handlers, update verticle state or anything similar, so it’s a sensible general approach.

Further Reading

The vert.x team themselves offer an excellent blog post about the Vert.x eventloop, with great material on the context, on GitHub.

Thanks

Thanks very much to the vert.x core team for their clear GitHub pages on the eventloop, and also to Alexander Lehmann for his answers to my stupid and naive questions on the Vert.x Google group.

Vert.x 3.4.0.Beta1 release


TL;DR

We have released 3.4.0.Beta1; this release is the biggest since Vert.x 3.0.0, with plenty of great features.

You can consume it in your projects from Maven or Gradle as usual with the version 3.4.0.Beta1 or read

Let me outline the important changes you can already find in this Beta1.

Vert.x Web Client

In a simple sentence: “Vert.x Web Client is to Vert.x HttpClient what Vert.x Web is to HttpServer”.

The Web Client makes it easy to perform HTTP request/response interactions with a web server, and provides advanced features like:

  • Json body encoding / decoding
  • request/response pumping
  • request parameters
  • unified error handling
  • form submissions
  • and more!

Built on top of HttpClient, it naturally inherits its features and provides a better API. Let me give an overview with one example:

WebClient client = WebClient.create(vertx);

client
  .get(8080, "myserver.mycompany.com", "/some-uri")
  .as(BodyCodec.json(User.class))
  .send(ar -> {
    if (ar.succeeded()) {

      HttpResponse<User> response = ar.result();
      User user = response.body();

      System.out.println("Received response with status code " + response.statusCode() + " with body " +
        user.getFirstName() + " " + user.getLastName());
    } else {
      System.out.println("Something went wrong " + ar.cause().getMessage());
    }
  });

RxJava singles

RxJava is a very popular Java extension, and in this release we focused on API usability with support for the RxJava Single type.

The new methods are prefixed by rx and deprecate the Observable-suffixed methods.

So instead of starting a server with listenObservable now you use rxListen:

HttpServer server = vertx.createHttpServer();
Single<HttpServer> single = server.rxListen(8080, "localhost");
single.subscribe(
  ok -> System.out.println("Server started"),
  err -> System.out.println("Something went wrong " + err.getMessage()));

One noticeable difference with the previous API is that the listen method is called when the Single is subscribed.

This is very handy when combined with the new web client:

Single<HttpResponse<Buffer>> single = client
   .get(8080, "myserver.mycompany.com", "/some-uri")
   .rxSend();

// Send the request
single.subscribe(response -> System.out.println("got response " + response.statusCode()));

// Send the request again
single.subscribe(response -> System.out.println("got response " + response.statusCode()));

Polyglot

In this beta you can try Vert.x for Kotlin.

Vert.x for Kotlin is based on the Java API and also supports running Kotlin verticles.

import io.vertx.core.*
import io.vertx.kotlin.core.http.HttpServerOptions

class Server : AbstractVerticle() {

  override fun start() {
    vertx.createHttpServer(

        // We provide Kotlin extension methods, allowing to use an idiomatic Kotlin API for building these options
        HttpServerOptions(
            port = 8080,
            host = "localhost"
        ))
        .requestHandler() { req ->
          req.response().end("Hello from Kotlin")
        }
        .listen()
    println("Server started on 8080")
  }
}

It can be run directly from the command line:

julien:vertx-kotlin-example julien$ vertx run Server.kt
Server started on 8080
Succeeded in deploying verticle

As you can see, Kotlin uses the Java API directly, and we thought it might be a cool thing to do the same with Groovy support. So we have reconsidered our Groovy support and now it uses the plain Java API, without losing the existing features.

Thanks to Groovy extension methods, idiomatic Groovy is still supported while benefiting from the full Java API!

Scala support is also planned for 3.4.0 and will be released soon, watch @vertx_project.

The microservices story goes on…

Our microservices APIs have matured and have now been moved out of tech preview. Of course this wasn’t enough, and we now have Vert.x Config, an extensible way to configure Vert.x applications supporting File, JSON, environment variables, system properties, HTTP, Kubernetes ConfigMap, Consul, Spring Config Server, Redis, Git, Zookeeper, … stores, as well as several formats: properties file, YAML and HOCON.

Here is a small example:

ConfigStoreOptions httpStore = new ConfigStoreOptions()
  .setType("http")
  .setConfig(new JsonObject()
    .put("host", "localhost").put("port", 8080).put("path", "/conf"));

ConfigStoreOptions fileStore = new ConfigStoreOptions()
  .setType("file")
  .setConfig(new JsonObject().put("path", "my-config.json"));

ConfigStoreOptions sysPropsStore = new ConfigStoreOptions().setType("sys");

ConfigRetrieverOptions options = new ConfigRetrieverOptions()
  .addStore(httpStore).addStore(fileStore).addStore(sysPropsStore);

ConfigRetriever retriever = ConfigRetriever.create(vertx, options);

Vert.x Config also supports a push-based notification style:

ConfigRetriever retriever = ConfigRetriever.create(Vertx.vertx(), options);
retriever.configStream()
  .endHandler(v -> {
    // retriever closed
  })
  .exceptionHandler(t -> {
    // an error has been caught while retrieving the configuration
  })
  .handler(conf -> {
    // the configuration
  });

Vert.x MQTT Server

Vert.x MQTT Server is able to handle connections, communication and message exchange with remote MQTT clients. Its API provides a bunch of events related to protocol messages received from clients and allows sending messages to them.

Here is a small example of creating an MQTT server, the Vert.x way!

MqttServerOptions options = new MqttServerOptions()
  .setPort(1883)
  .setHost("0.0.0.0");

MqttServer server = MqttServer.create(vertx, options);

server.endpointHandler(endpoint -> {

  System.out.println("connected client " + endpoint.clientIdentifier());

  endpoint.publishHandler(message -> {

    System.out.println("Just received message on [" + message.topicName() + "] payload [" +
      message.payload() + "] with QoS [" +
      message.qosLevel() + "]");
  });

  endpoint.accept(false);
});

server.listen(ar -> {
  if (ar.succeeded()) {
    System.out.println("MQTT server started and listening on port " + server.actualPort());
  } else {
    System.err.println("MQTT server error on start: " + ar.cause().getMessage());
  }
});
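The publishHandler above sees every message a client publishes; which subscribers receive it is decided by MQTT topic matching, where `+` matches exactly one topic level and `#` matches the whole remainder. Here is a self-contained sketch of those matching rules; this is protocol semantics, not Vert.x API, and it deliberately ignores corner cases such as `$`-prefixed system topics:

```java
public class TopicMatcher {

    // Returns true if the topic matches the filter per MQTT wildcard rules:
    // '+' matches exactly one level, '#' matches all remaining levels.
    static boolean matches(String filter, String topic) {
        String[] f = filter.split("/");
        String[] t = topic.split("/");
        for (int i = 0; i < f.length; i++) {
            if (f[i].equals("#")) {
                return true; // '#' swallows the remainder of the topic
            }
            if (i >= t.length) {
                return false; // filter has more levels than the topic
            }
            if (!f[i].equals("+") && !f[i].equals(t[i])) {
                return false; // literal level mismatch
            }
        }
        return f.length == t.length; // no wildcard left: lengths must agree
    }

    public static void main(String[] args) {
        System.out.println(matches("sensors/+/temperature", "sensors/kitchen/temperature")); // true
        System.out.println(matches("sensors/#", "sensors/kitchen/humidity"));                // true
        System.out.println(matches("sensors/+", "sensors/kitchen/humidity"));                // false
    }
}
```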

Vert.x SQL streaming

We now support a streaming style for SQL queries:

connection.queryStream("select * from test", stream -> {
  if (stream.succeeded()) {
    SQLRowStream sqlRowStream = stream.result();

    sqlRowStream
      .handler(row -> {
        // do something with the row...
        System.out.println(row.encode());
      })
      .endHandler(v -> {
        // no more data available, close the connection
        connection.close(done -> {
          if (done.failed()) {
            throw new RuntimeException(done.cause());
          }
        });
      });
  }
});
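The handler/endHandler pair lets rows be processed one at a time instead of loading the whole result set into memory. Here is a minimal sketch of that callback style over a plain iterator; `RowStream` is a hypothetical class for illustration, not the Vert.x `SQLRowStream` API:

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class RowStream<T> {

    private final Iterator<T> rows;
    private Consumer<T> handler = r -> { };
    private Runnable endHandler = () -> { };

    RowStream(Iterator<T> rows) {
        this.rows = rows;
    }

    RowStream<T> handler(Consumer<T> h) { this.handler = h; return this; }

    RowStream<T> endHandler(Runnable h) { this.endHandler = h; return this; }

    // Push each row to the handler, then signal completion exactly once.
    void run() {
        while (rows.hasNext()) {
            handler.accept(rows.next());
        }
        endHandler.run();
    }

    public static void main(String[] args) {
        new RowStream<>(List.of("val1", "val2").iterator())
            .handler(row -> System.out.println("Row : " + row))
            .endHandler(() -> System.out.println("no more rows, close the connection"))
            .run();
    }
}
```

In the real API the rows arrive asynchronously as the database cursor advances, which is what keeps memory usage flat for large result sets; this sketch only shows the shape of the callback contract.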

And with the RxJava API:

client
  .rxGetConnection() // Connect to the database
  .flatMapObservable(conn -> { // With the connection...
    return conn.rxUpdate("CREATE TABLE test(col VARCHAR(20))") // ...create test table
      .flatMap(result -> conn.rxUpdate("INSERT INTO test (col) VALUES ('val1')")) // ...insert a row
      .flatMap(result -> conn.rxUpdate("INSERT INTO test (col) VALUES ('val2')")) // ...another one
      .flatMap(result -> conn.rxQueryStream("SELECT * FROM test")) // ...get values stream
      .flatMapObservable(sqlRowStream -> {
        return sqlRowStream.toObservable() // Transform the stream into an Observable...
          .doOnTerminate(conn::close); // ...and close the connection when the stream is fully read or an error occurs
      });
  }).subscribe(row -> System.out.println("Row : " + row.encode()));

Finally

In addition to all these brilliant features, here is a list of more-than-noticeable things you get in this Beta1:

  • Vert.x Infinispan replaces Vert.x Jgroups cluster manager
  • Vert.x Consul Client provides a full-fledged client for Consul
  • OAuth2 predefined configuration with 16 settings, from Azure Active Directory to Twitter, with the usual suspects (Facebook, LinkedIn, …)
  • HTTP client now follows redirects

You can consume it in your projects from Maven or Gradle as usual with the version 3.4.0.Beta1.

Last but not least, I want to personally thank everyone who contributed to this release: beyond the Vert.x core team, the Vert.x committers and many other people have put a lot of effort into this upcoming 3.4.0!

Vert.x 3.4.0 is released !


Vert.x 3.4.0 has just been released with many new exciting features!

Vert.x has provided a polyglot runtime since the beginning; this version simply adds support for two major languages of the JVM ecosystem: Scala 2.12 and Kotlin 1.1.

Some features are so important that they deserve to be taken to another level: the Vert.x Web Client focuses on usability for building web applications. It builds upon the multi-purpose and scalable HTTP Client and inherits all its features.

Vert.x RxJava is a very popular extension; 3.4 supports the rx.Single reactive type as well as Observable with reactive pull back-pressure. Combined with the Vert.x Web Client, it is a very powerful combo.

You can now get a stream for large result sets using the JDBC client, and with RxJava your stream naturally becomes an Observable.

When it comes to IoT, Vert.x is a relevant choice thanks to its unique toolkit approach that combines modularity and a reduced footprint; there is no doubt that the new Vert.x MQTT Server extends Vert.x capabilities in this field!

Everyone knows Kafka, everyone loves Kafka, the new Vert.x Kafka Client gives you everything you need to use Kafka the Vert.x way!

On the microservices side, Vert.x gRPC will give a boost to your networking, and Vert.x Config fills the gap in our toolbox. In addition, we now provide a full-fledged Vert.x Consul client!

During this release cycle, we paid special attention to security, ensuring that Vert.x-Web sessions are safe and follow the OWASP recommendations. Also, Vert.x web got many usability improvements, with a revised OAuth2 setup and a new htdigest authentication scheme.

Devops hasn’t been forgotten with Vert.x Health Check, a key feature in application monitoring.

On top of many bug fixes, here is a list of the most important new features you can find in 3.4.0:

  • Vert.x Infinispan is a new cluster option and supersedes the JGroups option
  • HTTP and Web client redirect handling
  • Zero-config service proxies generation with a processor classified jar
  • a new SelfSignedCertificate to make it easy to create tests and demos with TLS/SSL
  • Hystrix metrics in the circuit breaker
  • Handlebars templates can now fully resolve properties passed to them
  • JsonObject POJO mapping convenience
  • Http compression level option
  • Groovy support now uses extension methods and does not generate wrappers anymore
  • Dropwizard match metrics can now have an alias
  • RxHelper method for adapting an Observable to a ReadStream
  • RxHelper method for adapting a Handler&lt;AsyncResult&lt;T&gt;&gt; to a Subscriber
  • provide Alpine and Busybox docker images

Vert.x 3.4.0 release notes:

The event bus client using the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.
