
Data-driven Apps made easy with Vert.x 3.4.0 and headless CMS Gentics Mesh


In this article, I would like to share why Vert.x is not only a robust foundation for the headless Content Management System Gentics Mesh but also how the recent release 3.4.0 can be used to build a template-based web server with Gentics Mesh and handlebars.

A headless CMS focuses on delivering your content through an API and allows editors to create and manage that content through a web-based interface. Unlike a traditional CMS, it does not provide a specifically rendered output. The frontend part (the head) is literally cut off, allowing developers to create websites, apps, or any other data-driven projects with their favourite technologies.

Vert.x 3.4.0 has just been released and it comes with a bunch of new features and bug fixes. I am especially excited about a small enhancement that changes the way in which the handlebars template engine handles its context data. Previously it was not possible to resolve Vert.x's JsonObjects within the render context. With my enhancement #509 - released in Vert.x 3.4.0 - it is now possible to access nested data from these objects within your templates. Previously this would have required flattening out each object and resolving it individually, which would have been very cumbersome.

I'm going to demonstrate this enhancement by showing how to build a product catalogue using Vert.x together with handlebars templates to render and serve the web pages. The product data is managed, stored and delivered as JSON by the Gentics Mesh server.

Clone, Import, Download, Start - Set up your product catalogue website quickly

Let’s quickly set up everything you need to run the website before I walk you through the code.

1. Clone - Get the full Vert.x with Gentics Mesh example from Github

Fire up your terminal and clone the example application to the directory of your choice.

git clone git@github.com:gentics/mesh-vertx-example.git

2. Import - The maven project in your favourite IDE

The application is set up as a maven project and can be imported in Eclipse IDE via File → Import → Existing Maven Project

3. Download - Get the headless CMS Gentics Mesh

Download the latest version of Gentics Mesh and start the CMS with this one-liner

java -jar mesh-demo-0.6.xx.jar

For the current example we are going to use the read-only user credentials (webclient:webclient). If you want to play around with the demo data you can point your browser to http://localhost:8080/mesh-ui/ to reach the Gentics Mesh user interface and use one of the available demo credentials to login.

4. Start - The application and browse the product catalogue

You can start the Vert.x web server by running Server.java.

That’s it - now you can access the product catalogue website in your browser: http://localhost:3000

Why Vert.x is a good fit for Gentics Mesh

Before digging into the example, let me share a few thoughts on Vert.x and Gentics Mesh in combination. In this example Vert.x is part of the frontend stack, delivering the product catalogue website. It might also be of interest to you that Vert.x is used at the very heart of Gentics Mesh itself: the Gentics Mesh REST API endpoints are built on top of Vert.x as a core component.

The great thing about Vert.x is that there are a lot of default implementations for various tasks such as authentication, database integration, monitoring and clustering. It is possible to use one or more features and omit the rest, so your application remains lightweight.

Curious about the code?

Source: https://github.com/gentics/mesh-vertx-example

Now that everything is up and running let’s have a detailed look at the code.

A typical deployment unit for Vert.x is a verticle. In our case we use the verticle to bundle our code and run the web server within it. Once deployed, Vert.x will run the verticle and start the HTTP server code.

The Gentics Mesh REST client is used to communicate with the Gentics Mesh server. The Vert.x web library is used to set up our HTTP Router. As with other routing frameworks like Silex and Express, the router can be used to create routes for inbound HTTP requests. In our case we only need two routes. The main route which accepts the request will utilize the Gentics Mesh Webroot API Endpoint which is able to resolve content by a provided path. It will examine the response and add fields to the routing context.

The other route is chained and will take the previously prepared routing context and render the desired template using the handlebars template handler.
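
To make this routing structure concrete, here is a minimal sketch of how such a chained setup can look with Vert.x Web and the handlebars template handler. The Mesh lookup is only a placeholder (resolveViaMeshWebroot is a hypothetical helper), so this is a sketch rather than the example's exact code:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.TemplateHandler;
import io.vertx.ext.web.templ.HandlebarsTemplateEngine;

public class CatalogueServerSketch extends AbstractVerticle {

  @Override
  public void start() {
    Router router = Router.router(vertx);

    // 1) main route: resolve the requested path (placeholder for the Mesh Webroot call)
    //    and put the loaded JsonObject into the routing context
    router.route().handler(rc -> {
      JsonObject node = resolveViaMeshWebroot(rc.request().path()); // hypothetical helper
      rc.put("product", node); // nested JsonObject, resolvable in handlebars since 3.4.0
      rc.next();               // hand over to the chained template route
    });

    // 2) chained route: render the handlebars template derived from the request path
    router.route().handler(TemplateHandler.create(HandlebarsTemplateEngine.create()));

    vertx.createHttpServer().requestHandler(router::accept).listen(3000);
  }

  private JsonObject resolveViaMeshWebroot(String path) {
    // stand-in for the Gentics Mesh REST client call to the Webroot API endpoint
    return new JsonObject().put("fields", new JsonObject().put("name", "demo product"));
  }
}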

First we can handle various special request paths such as "/" for the welcome page or the typical favicon.ico request. Other requests are passed to the Webroot API handler method.

Once the path has been resolved to a WebRootResponse we can examine that data and determine whether it is a binary response or a JSON response. Binary responses may occur if the requested resource represents an image or any other binary data. Resolved binary contents are directly passed through to the client and the handlebars route is not invoked.

Examples

JSON responses on the other hand are examined to determine the type of node which was located. A typical node response contains information about the schema used by the node. This will effectively determine the type of the located content (e.g.: category, vehicle).

The demo application serves different pages which correspond to the identified type. Take a look at the template sources within src/main/resources/templates/ if you are interested in the handlebars syntax. The templates in the example should cover most common cases.

The Mesh REST Client library internally makes use of the Vert.x core HTTP client.

RxJava is being used to handle these async requests. This way we can combine all asynchronously requested Gentics Mesh resources (breadcrumb, list of products) and add the loaded data into the routing context.

The Vert.x example server loads JSON content from the Gentics Mesh server. The JsonObject is placed in the handlebars render context and the template can access all nested fields within.

This way it is possible to resolve any field within the handlebars template.
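
As a small illustration (the field names here are made up, not the demo's actual schema), putting the loaded JsonObject into the routing context is all that is needed for the template to reach nested values:

// assuming "node" is the JsonObject loaded from Gentics Mesh,
// e.g. {"fields": {"name": "Tang Dynasty", "price": 250}}
routingContext.put("product", node);

// the handlebars template can then resolve the nested fields directly, e.g.:
//   <h1>{{ product.fields.name }}</h1>
//   <p>{{ product.fields.price }}</p>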

That’s it! Finally, we can invoke mvn clean package in order to package our webserver. The maven-shade-plugin will bundle everything and create an executable jar.

What’s next?

Future releases of Gentics Mesh will refine the Mesh REST Client API and provide a GraphQL API, which will reduce the JSON overhead. Using GraphQL will also reduce the number of requests which need to be issued.

Thanks for reading. If you have any further questions or feedback, don't hesitate to send me a tweet to @Jotschi or @genticsmesh.


Scala is here


TL;DR

  • Scala support for Vert.x is here!
  • It is based on Scala 2.12; no support for 2.11 is planned
  • All Vert.x modules are available in a Scala flavor
  • It’s awesome
  • Get started here

Intro

The rise of Scala as one of the most important languages on the JVM caught many (me included) by surprise. This hybrid of functional and imperative paradigms struck a chord with many developers. Thanks to Scala a lot of people who’d never have touched a language like Haskell got exposed to functional programming. This exposure was one of the driving forces to get streams and lambda into the JVM.

With the release of Vert.x 3.4.0 we finally introduced Scala to the family of supported languages: vertx-lang-scala.

In this post I will introduce the new stack and how the power of Scala can be used in your favorite reactive toolkit.

Basics

vertx-lang-scala is based on Scala 2.12. There are no plans to support 2.11.

All modules available for Vert.x are supported (you can check here).

Modules use the following naming scheme: io.vertx:<module>-scala_2.12:<version>. The Scala version of io.vertx:vertx-web:3.4.0 would be io.vertx:vertx-web-scala_2.12:3.4.0.

There is an sbt-based quickstart-project available that will be updated for each Vert.x-release.

Please note: Although sbt is used in this quickstart it is by no means required. There are no special plugins involved so vertx-lang-scala can easily be used with Gradle or Maven.

I use sbt as it is the default build system used for Scala projects.

Quickstart

Let’s get started by cloning the quickstart:

git clone git@github.com:vert-x3/vertx-sbt-starter.git

You just got the following things:

  • An sbt project containing dependencies to Vert.x-core and Vert.x-web
  • The ability to create a fat-jar via sbt assembly
  • The ability to create a docker container via sbt docker
  • A few example verticles
  • Unit test examples
  • A pre-configured Scala-shell inside sbt

We will now run the application to get some quick satisfaction. Use sbt assembly to produce the fat-jar followed by java -jar target/scala-2.12/vertx-scala-sbt-assembly-0.1-SNAPSHOT.jar. Now point your browser to http://localhost:8666/hello for a classic welcome message.

The details

Open your IDE so we can take a look at what’s going on under the hood. We start with the HttpVerticle.

package io.vertx.scala.sbt

import io.vertx.lang.scala.ScalaVerticle
import io.vertx.scala.ext.web.Router
import scala.concurrent.Future

class HttpVerticle extends ScalaVerticle { // <1>

  override def startFuture(): Future[Unit] = { // <2>
    val router = Router.router(vertx) // <3>
    val route = router
      .get("/hello")
        .handler(_.response().end("world"))

    vertx //<4>
      .createHttpServer()
      .requestHandler(router.accept)
      .listenFuture(8666, "0.0.0.0")  // <5>
        .map(_ => ()) // <6>
  }
}
  1. ScalaVerticle is the base class for all Scala-Verticles. It provides all required methods to integrate with the Vert.x-runtime.
  2. There are two ways to start a Verticle. Overriding startFuture, like in this example, tells Vert.x to only consider the Verticle fully started after the returned Future[Unit] has been successfully completed. Alternatively one can override start and by that signal to Vert.x the instant availability of the Verticle.
  3. This block creates a Router for incoming HTTP-requests. It registers a handler to answer with “world” if a request to the URL “/hello” arrives. The class is coming from the Vert.x-web-module.
  4. Every Verticle has access to the Vert.x-instance. Here we use it to create a webserver and register our router to handle incoming requests.
  5. We finally reached the reason why I use startFuture in the first place. All operations in Vert.x are asynchronous, so starting the webserver most definitely means it takes some more time until it is bound to the given port (8666 in this case). That's why listenFuture is used, which returns a Future which in turn contains the actual instance of the webserver that just got started. So our Verticle will be ready to receive requests after the returned Future has been completed.
  6. In most cases we can return the Future directly. In this case the Future returned by listenFuture has the wrong type. We get a Future[HttpServer] but we need a Future[Unit] as you can see in the signature of startFuture. This call takes care of mapping the given Future[HttpServer] to the required return type.

Testing

I use ScalaTest for all my testing needs. It comes with stellar support for asynchronous operations and is a perfect fit for testing Vert.x-applications.

The following HttpVerticleSpec shows how to test an HTTP-API using only Vert.x-classes. Personally I prefer REST-assured with its rich DSL. For this post I wanted to stick with Vert.x-API, so here we go.

package io.vertx.scala.sbt

import org.scalatest.Matchers
import scala.concurrent.Promise

class HttpVerticleSpec extends VerticleTesting[HttpVerticle] with Matchers { // <1>

  "HttpVerticle" should "bind to 8666 and answer with 'world'" in { // <2>
    val promise = Promise[String] // <3>

    vertx.createHttpClient()  // <4>
      .getNow(8666, "127.0.0.1", "/hello",
        r => {
          r.exceptionHandler(promise.failure)
          r.bodyHandler(b => promise.success(b.toString))
        })

    promise.future.map(res => res should equal("world")) // <5>
  }

}
  1. VerticleTesting is a base class for your tests included with the quickstart-project. It’s a small helper that takes care of deploying/un-deploying the Verticle to be tested and manages a Vert.x-instance. It additionally extends AsyncFlatSpec so we can use Futures as test-return-types.
  2. Isn’t it nice and readable?
  3. The promise is required as the whole test will run async
  4. We use the vertx-instance provided by VerticleTesting to create a Netty-based HttpClient. We instruct the client to call the specified URL and to succeed the Promise with the returned body.
  5. This creates the actual assertion. After getting the Future from the Promise an assertion is created: The Result should be equal to the String “world”. ScalaTest takes care of evaluating the returned Future.

That’s all you need to get started!

Futures in vertx-lang-scala

Now for a more in-depth topic I think is worth mentioning. vertx-lang-scala treats async operations the Scala way, which is a little different from what you might be used to from Vert.x. For async operations like subscribing to the eventbus or deploying a Verticle you would call a method like this:

vertx.deployVerticle("com.foo.OtherVerticle", res -> {
  if (res.succeeded()) {
    startFuture.complete();
  } else {
    startFuture.fail(res.cause());
  }
});

The deployVerticle method takes the Verticle-name and a Handler[AsyncResult] as its arguments. The Handler[AsyncResult] is called after Vert.x tried deploying the Verticle. This style can also be used for Scala (which might ease the transition when coming from the Java-world), but there is a much more Scala-ish way of doing this.

For every method taking a Handler[AsyncResult] as its argument I create an alternative method using Scala-Futures.

vertx.deployVerticleFuture("com.foo.OtherVerticle") // <1>
  .onComplete {  // <2>
    case Success(s) => println(s"Verticle id is: $s") // <3>
    case Failure(t) => t.printStackTrace()
  }
  1. A method providing a Future based alternative gets Future appended to its name and returns a Future instead of taking a Handler as its argument.
  2. We are now free to use Future the way we want. In this case onComplete is used to react on the completion.
  3. Pattern matching on the result.

I strongly recommend using this approach over using Handlers as you won’t run into Callback-hell and you get all the goodies Scala provides for async operations.

Future and Promise both need an ExecutionContext
The VertxExecutionContext is made implicitly available inside the ScalaVerticle. It makes sure all operations are executed on the correct Event Loop. If you are using Vert.x without Verticles you have to provide it on your own.

Using the console

A great feature of sbt is the embedded, configurable Scala-console. The console available in the quickstart-project is pre-configured to provide a fresh Vert.x-instance and all required imports so you can start playing around with Vert.x in an instant.

Execute the following commands in the project-folder to deploy the HttpVerticle:

sbt
> console
scala> vertx.deployVerticle(nameForVerticle[HttpVerticle])
scala> vertx.deploymentIDs

After executing this sequence you can now point your browser to http://localhost:8666/hello to see our message. The last command issued shows the IDs under which Verticles have been deployed.

To get rid of the deployment you can now type vertx.undeploy(vertx.deploymentIDs.head).

That’s it!

This was a very quick introduction to our new Scala-stack. I hope to have given you a little taste of the Scala goodness now available with Vert.x. I recommend digging a little more through the quickstart to get a feeling for what's there. In my next blog post I will explain some of the decisions I made and the obstacles I faced with the differences between Java and Scala (hint: they are way bigger than I was aware of).

Enjoy!

Dynamic Routing in Serverless Microservice with Vert.x Event Bus


This is a re-publication of the following blog post.

SERVERLESS FRAMEWORK

The Serverless Framework has become the de facto toolkit for building and deploying Serverless functions or applications. Its community has done a great job advancing the tools around Serverless architecture.

However, in the Serverless community there is debate among developers on whether a single AWS Lambda function should only be responsible for a single API endpoint. My answer, based on my real-world production experience, is NO.

Imagine if you are building a set of APIs with 10 endpoints and you need to deploy the APIs to DEV, STAGE and PROD environments. Now you are looking at 30 different functions to version, deploy and manage - not to mention the Copy & Paste code and configuration that will result from this type of set-up. NO THANKS!!!

I believe a more pragmatic approach is 1 Lambda Function == 1 Microservice.

For example, if you were building a User Microservice with basic CRUD functionality, you should implement CREATE, READ, UPDATE and DELETE in a single Lambda function. In the code, you should resolve the desired action by inspecting the request or the context.

VERT.X TO THE RESCUE

There are many benefits to using Vert.x in any application. With Vert.x, you get a rock-solid and lightweight toolkit for building reactive, highly performant, event-driven and non-blocking applications. The toolkit even provides asynchronous APIs for accessing traditional blocking drivers such as JDBC.

However, for this example, we will mainly focus on the Event Bus. The event bus allows different parts of your application to communicate with each other via event messages. It supports publish/subscribe, point to point, and request-response messaging.

For the User Microservice example above, we could treat the combination of the HTTP METHOD and RESOURCE PATH as a unique event channel, and register the subscribers/handlers to respond appropriately.
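
The idea can be sketched as follows. This is not the post's gist: the input keys httpMethod and resource (API Gateway proxy format) and the reply payloads are assumptions made to keep the sketch self-contained:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class UserServiceSketch implements RequestHandler<Map<String, Object>, String> {

  // reused across invocations while AWS keeps the container warm
  private static final Vertx vertx = Vertx.vertx();

  static {
    // one handler per HTTP method + resource path combination
    vertx.eventBus().consumer("GET:/users", msg -> msg.reply("{\"action\":\"read\"}"));
    vertx.eventBus().consumer("POST:/users", msg -> msg.reply("{\"action\":\"create\"}"));
    vertx.eventBus().consumer("PUT:/users", msg -> msg.reply("{\"action\":\"update\"}"));
    vertx.eventBus().consumer("DELETE:/users", msg -> msg.reply("{\"action\":\"delete\"}"));
  }

  @Override
  public String handleRequest(Map<String, Object> input, Context context) {
    // the combination of method and path is the event bus address
    String address = input.get("httpMethod") + ":" + input.get("resource");
    CompletableFuture<String> response = new CompletableFuture<>();
    vertx.eventBus().<String>send(address, new JsonObject(input).encode(), reply -> {
      if (reply.succeeded()) {
        response.complete(reply.result().body());
      } else {
        response.completeExceptionally(reply.cause());
      }
    });
    return response.join(); // Lambda handlers are synchronous, so wait for the reply
  }
}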

Let’s dive right in.

GOAL:

Create a reactive, message-driven, asynchronous User Microservice with GET, POST, DELETE, PUT CRUD operations in a single AWS Lambda Function using the Serverless Framework.

Serverless stack definition:

SOLUTION:

Use Vert.x's Event Bus to handle dynamic routing to event handlers based on HTTP method and resource path from the API input.

Lambda Handler:

CODE REVIEW

Lines 14-19 initialize the Vert.x instance. AWS Lambda will hold on to this instance for the life of the container/JVM. It is reused in subsequent requests.

Line 17 registers the User Service handlers.

Line 22 defines the main handler method that is called when the Lambda function is invoked.

Line 27 sends the Lambda function input to the (dynamic) address where handlers are waiting to respond.

Lines 44-66 define the specific handlers and bind them to the appropriate channels (HTTP method + resource path).

SUMMARY

As you can see, Vert.x's Event Bus makes it very easy to dynamically support multiple routes in a single Serverless function. This reduces the number of functions you have to manage, deploy and maintain in AWS. In addition, you gain access to asynchronous, non-blocking APIs that come standard with Vert.x.

Serverless + Vert.x = BLISS

Building a real-time web app with Angular/Ngrx and Vert.x


Nowadays, there are multiple tech stacks to build a real-time web app. What are the best choices to build real-time Angular client apps, connected to a JVM-based backend? This article describes an Angular+Vertx real-time architecture with a Proof of Concept demo app.

This is a re-publication of the following Medium post.

Intro

Welcome to the real-time web! It’s time to move on from traditional synchronous HTTP request/response architectures to reactive apps with connected clients (ouch… that’s a lot of buzzwords in just one sentence)!

Real-time app

Image source: https://www.voxxed.com

To build this kind of app, MeteorJS is the new cool kid on the block (v1.0 released in October 2014): a full stack Javascript platform to build connected-client reactive applications. It allows JS developers to build and deploy amazing modern web and mobile apps (iOS/Android) in no time, using a unified backend+frontend codebase within a single app repo. That's a pretty ambitious approach but it requires a very opinionated and highly coupled JS tech stack and it's still a pretty niche framework.

Moreover, we are a Java shop on the backend. At AgoraPulse, we rely heavily on:

  • Angular and Ionic for the JS frontend (with a shared business/data architecture based on Ngrx),
  • Groovy and Grails ecosystem for the JVM backend.

So my question is:

What are the best choices to build real-time Angular client apps, connected to a JVM-based backend these days?

Our requirements are pretty basic. We don't need Meteor's full end-to-end application model. We just want to be able to:

  1. build a reactive app with an event bus on the JVM, and
  2. extend the event bus down to the browser to be able to publish/subscribe to real-time events from an Angular app.

Server side (JVM)

Reactive apps are a hot topic nowadays and there are many great libs/platforms to build this type of event-driven architecture on the JVM:

Client side

ReactJS and Angular are the two most popular frameworks right now to build modern JS apps. Most platforms use SockJS to handle real-time connections:

  • Vertx-web provides a SockJS server implementation with an event bus bridge and a vertx-eventbus.js client library (very easy to use),
  • Spring provides websocket SockJS support though Spring Messaging and Websocket libs (see an example here)

Final choice: Vert.x + Angular

In the end, I’ve chosen to experiment with Vert.x for its excellent Groovy support, distributed event bus, scalability and ease of use.

I enjoyed it very much. Let me show you the result of my experimentation which is the root of our real-time features coming very soon in AgoraPulse v6.0!

Why Vert.x?

Like other reactive platforms, Vert.x is event-driven and non-blocking. It scales very well (even better than Node.js).

Unlike other reactive platforms, Vert.x is polyglot: you can use Vert.x with multiple languages including Java, JavaScript, Groovy, Ruby, Ceylon, Scala and Kotlin.

Unlike Node.js, Vert.x is a general purpose tool-kit and unopinionated. It’s a versatile platform suitable for many things: from simple network utilities, sophisticated modern web applications, HTTP/REST microservices or a full blown back-end message-bus application.

Like other reactive platforms, it looks scary in the beginning when you read the documentation… ;) But once you start playing with it, it remains fun and simple to use, especially with Groovy! Vert.x really allows you to build substantial systems without getting tangled in complexity.

In my case, I was mainly interested in the distributed event bus it provides (a core feature of Vert.x).

To validate our approach, we built prototypes with the following goals:

  • share and synchronize a common (Ngrx-based) state between multiple connected clients, and
  • distribute real-time (Ngrx-based) actions across multiple connected clients, which impact local states/reducers.

Note: @ngrx/store is an RxJS-powered state management library for Angular apps, inspired by Redux. It's currently the most popular way to structure complex business logic in Angular apps.

Redux

Source: https://www.smashingmagazine.com/2016/06/an-introduction-to-redux/

PROOF OF CONCEPT

Here is the repo of our initial proof of concept:

http://github.com/benorama/ngrx-realtime-app

The repo is divided into two separate projects:

  • Vert.x server app, based on Vert.x (version 3.3), managed by Gradle, with a main verticle developed in Groovy lang.
  • Angular client app, based on Angular (version 4.0.1), managed by Angular CLI with state, reducers and actions logic based on @ngrx/store (version 2.2.1)

For the demo, we are using the counter example code (actions and reducers) from @ngrx/store.

The counter client business logic is based on:

  • CounterState interface, counter state model,
  • counterReducer reducer, counter state management based on dispatched actions, and
  • Increment, decrement and reset counter actions.

State is maintained server-side with a simple singleton CounterService.

class CounterService {
    static INCREMENT = '[Counter] Increment'
    static DECREMENT = '[Counter] Decrement'
    static RESET = '[Counter] Reset'

    int total = 0

    void handleEvent(event) {
        switch(event.type) {
            case INCREMENT:
                total++
                break
            case DECREMENT:
                total--
                break
            case RESET:
                total = 0
                break
        }
    }
}

Client state initialization through Request/Response

Initial state is initialized with simple request/response (or send/reply) on the event bus. Once the client is connected, it sends a request to the event bus at the address counter::total. The server replies directly with the value of CounterService total and the client dispatches locally a reset action with the total value from the reply.

Vertx Request Response

Source: https://www.slideshare.net/RedHatDevelopers/vertx-microservices-were-never-so-easy-clement-escoffier

Here is an extract of the corresponding code (from AppEventBusService):

initializeCounter() {
    this.eventBusService.send('counter::total', body, (error, message) => {
        // Handle reply
        if (message && message.body) {
            let localAction = new CounterActions.ResetAction();
            localAction.payload = message.body; // Total value
            this.store.dispatch(localAction);
        }
    });
}
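
The server side of this exchange is not shown in the post (the demo's main verticle is written in Groovy). A rough Java sketch of the wiring, assuming a counterService instance like the one above and the usual vertx-web SockJS bridge classes (imports omitted), could look like this:

Router router = Router.router(vertx);

// bridge the event bus to the browser over SockJS, permitting only the demo addresses
BridgeOptions bridgeOptions = new BridgeOptions()
  .addInboundPermitted(new PermittedOptions().setAddress("counter::total"))
  .addInboundPermitted(new PermittedOptions().setAddress("counter::actions"))
  .addOutboundPermitted(new PermittedOptions().setAddress("counter::actions"));
router.route("/eventbus/*").handler(SockJSHandler.create(vertx).bridge(bridgeOptions));

// reply to the client's initial request with the current total
vertx.eventBus().consumer("counter::total", message -> message.reply(counterService.total));

vertx.createHttpServer().requestHandler(router::accept).listen(8080);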

Actions distribution through Publish/Subscribe

Action distribution/sync uses the publish/subscribe pattern.

Counter actions are published from the client to the event bus at the address counter::actions.

Any client that has subscribed to the counter::actions address will receive the actions and redispatch them locally to impact app states/reducers.

Vertx Publish Subscribe

Source: https://www.slideshare.net/RedHatDevelopers/vertx-microservices-were-never-so-easy-clement-escoffier

Here is an extract of the corresponding code (from AppEventBusService):

publishAction(action: RemoteAction) {
    if (action.publishedByUser) {
        console.error("This action has already been published");
        return;
    }
    action.publishedByUser = this.currentUser;
    this.eventBusService.publish(action.eventBusAddress, action);
}
subscribeToActions(eventBusAddress: string) {
    this.eventBusService.registerHandler(eventBusAddress, (error, message) => {
        // Handle message from subscription
        if (message.body.publishedByUser === this.currentUser) {
            // Ignore action sent by the current user
            return;
        }
        let localAction = message.body;
        this.store.dispatch(localAction);
    });
}

The event bus publishing logic is achieved through a simple Ngrx Effect. Any action that extends the RemoteAction class will be published to the event bus.

@Injectable()
export class AppEventBusEffects {

    constructor(private actions$: Actions, private appEventBusService: AppEventBusService) {}
    // Listen to all actions and publish remote actions to account event bus
    @Effect({dispatch: false}) remoteAction$ = this.actions$
        .filter(action => action instanceof RemoteAction && action.publishedByUser == undefined)
        .do((action: RemoteAction) => {
            this.appEventBusService.publishAction(action);
        });

    @Effect({dispatch: false}) login$ = this.actions$
        .ofType(UserActionTypes.LOGIN)
        .do(() => {
            this.appEventBusService.connect();
        });
}

You can see all of this in action by locally launching the server and the client app in two separate browser windows.

Demo app screen

Bonus: the demo app also includes user status (offline/online), based on the event bus connection status.

The counter state is shared and synchronized between connected clients and each local action is distributed in real-time to other clients.

Mission accomplished!

Typescript version of Vertx EventBus Client
The app uses our own Typescript version of the official JS Vertx EventBus Client. It can be found here; any feedback or improvement suggestions are welcome!

Time scheduling with Chime


Time scheduling.

Eclipse Vert.x executes periodic and delayed actions with periodic and one-shot timers. This is the basis for time scheduling, and a richer feature set would be rather interesting: being notified at a certain date / time, taking holidays into account, repeating notifications until a given date, applying time zones, handling daylight saving time, etc. There are a lot of useful features a time scheduler may introduce to the Vert.x stack.

Chime.

Chime is a time scheduler verticle which works on the Vert.x event bus and provides:

  • scheduling with cron-style, interval or union timers:
    • at a certain time of day (to the second);
    • on certain days of the week, month or year;
    • with a given time interval;
    • with nearly any combination of all of above;
    • repeating a given number of times;
    • repeating until a given time / date;
    • repeating infinitely
  • proxying event bus with conventional interfaces
  • applying time zones available on JVM with daylight saving time taken into account
  • flexible timers management system:
    • grouping timers;
    • defining a timer start or end times
    • pausing / resuming;
    • fire counting;
  • listening and sending messages via event bus with JSON;
  • publishing or sending timer fire event to the address of your choice.

Chime is written in Ceylon and is available at Ceylon Herd.

Running.

Ceylon users.

Deploy Chime using Verticle.deployVerticle method.

import io.vertx.ceylon.core { vertx }
import herd.schedule.chime { Chime }

Chime().deploy(vertx.vertx());

Or with vertx.deployVerticle("ceylon:herd.schedule.chime/0.2.1"), but ensure that the Ceylon verticle factory is available on the class path.

Java users.

  1. Ensure that the Ceylon verticle factory is available on the class path.
  2. Make sure the Ceylon versions are consistent. For instance, Vert.x 3.4.1 depends on Ceylon 1.3.0 while Chime 0.2.1 depends on Ceylon 1.3.2.
  3. Deploy verticle, like:
    vertx.deployVerticle("ceylon:herd.schedule.chime/0.2.1")

An example with Maven is available on GitHub.

Schedulers.

Well, the Chime verticle is deployed. Let's see its structure.
In order to provide flexible and broad ways to manage timing, a two-level architecture is adopted. It consists of schedulers and timers. A timer is a unit which fires at a given time, while a scheduler is a group of timers and provides the following:

  • creating and deleting timers;
  • pausing / resuming all timers working within the scheduler;
  • info on the running timers;
  • default time zone;
  • listening to the event bus at the given scheduler address for requests.

Any timer operates within some scheduler, and one or several schedulers have to be created before starting scheduling.
When the Chime verticle is deployed, it starts listening to the event bus at the chime address (which can be configured). In order to create a scheduler, send a JSON message to this address:

{
    "operation": "create",
    "name": "scheduler name"}

Once a scheduler is created, it starts listening to the event bus at the scheduler name address. Sending messages to the chime address or to the scheduler name address is largely equivalent, except that the chime address provides services for every scheduler, while the scheduler address provides services for this particular scheduler only.
The request sent to Chime has to contain operation and name keys. The name key provides the scheduler or timer name, while the operation key identifies the action Chime has to perform. There are only four possible operations:

  • create - create new scheduler or timer;
  • delete - delete scheduler or timer;
  • info - request info on Chime or on a particular scheduler or timer;
  • state - set or get scheduler or timer state (running, paused or completed).
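
For instance, requesting info about an existing scheduler needs nothing more than these two keys. A minimal Java snippet, sent to the default chime address (the exact shape of the reply is detailed in the Chime documentation), might look like this:

JsonObject infoRequest = new JsonObject()
    .put("operation", "info")
    .put("name", "scheduler name");

vertx.eventBus().send("chime", infoRequest, reply -> {
    if (reply.succeeded()) {
        System.out.println(reply.result().body()); // state and timers of the scheduler
    } else {
        reply.cause().printStackTrace();           // e.g. the scheduler does not exist
    }
});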

Timers.

Now that we have a scheduler created, timers can be run within it. There are two ways to access a given timer:

  1. Sending message to chime address with ‘name’ field set to scheduler name:timer name.
  2. Sending message to scheduler name address with ‘name’ field set to either timer name or scheduler name:timer name.

The timer request is rather complex and contains a lot of details. In this blog post only the basic features are considered:

{
    "operation": "create",
    "name": "scheduler name:timer name",
    "description": {}
};

This is rather similar to the request sent to create a scheduler; the only difference is the added description field. This description is a JSON object which identifies the particular timer type and its details.
The other fields, not shown here, are optional and include:

  • initial timer state (paused or running);
  • start or end date-time;
  • number of repeating times;
  • whether the timer message is to be published or sent;
  • timer fire message and delivery options;
  • time zone.

Timer descriptions.

Currently, three types of timers are supported:

  • Interval timer which fires after each given time period (minimum 1 second):

    {
      "type": "interval",
      "delay": "timer delay in seconds, Integer"
    };
  • Cron style timer which is defined with cron-style:

    {
      "type": "cron",
      "seconds": "seconds in cron style, String",
      "minutes": "minutes in cron style, String",
      "hours": "hours in cron style, String",
      "days of month": "days of month in cron style, String",
      "months": "months in cron style, String",
      "days of week": "days of week in cron style, String, optional",
      "years": "years in cron style, String, optional"
    };

    Cron timer is rather powerful and flexible. Investigate the specification for the complete list of features.
  • Union timer which combines a number of timers into a one:

    {
      "type": "union",
      "timers": ["list of the timer descriptions"]
    };

    Union timer may be useful to fire at a list of specific dates / times.

Timer events.

Once a timer is started, it sends or publishes messages to the scheduler name:timer name address in JSON format. Two types of events are sent:

  • fire event which occurs when time reaches next timer value:
    {  
      "name": "scheduler name:timer name, String",  
      "event": "fire",  
      "count": "total number of fire times, Integer",  
      "time": "ISO formated time / date, String",  
      "seconds": "number of seconds since last minute, Integer",  
      "minutes": "number of minutes since last hour, Integer",  
      "hours": "hour of day, Integer",  
      "day of month": "day of month, Integer",  
      "month": "month, Integer",  
      "year": "year, Integer",  
      "time zone": "time zone the timer works in, String"
    };
  • complete event which occurs when the timer is exhausted by some criteria given in the timer create request:
    {  
      "name": "scheduler name:timer name, String",  
      "event": "complete",  
      "count": "total number of fire times, Integer"  
    };

Basically, now we know everything to be happy with Chime: schedulers and requests to them, timers and timer events. We will see some examples in the next section.

Examples.

Ceylon example.

Let's consider a timer which has to fire at 16:30 on the last Sunday of every month.

// listen the timer events
eventBus.consumer (
    "my scheduler:my timer",
    (Throwable|Message msg) {
        if (is Message msg) { print(msg.body()); }
        else { print(msg); }    
    }
);
// create scheduler and timer
eventBus.send (
    "chime",
    JsonObject {
        "operation" ->"create",
        "name" ->"my scheduler:my timer",
        "description" -> JsonObject {
            "type" ->"cron",
            "seconds" ->"0",
            "minutes" ->"30",
            "hours" ->"16",
            "days of month" ->"*",
            "months" ->"*",
            "days of week" ->"SundayL"
        }
    }
);

‘*’ means any, ‘SundayL’ means last Sunday.

If a ‘create’ request is sent to the Chime address with name set to ‘scheduler name:timer name’ and the corresponding scheduler hasn't been created before, then Chime creates both a new scheduler and a new timer.

Java example.

Let's consider a timer which has to fire every Monday at 8:30 and every Friday at 17:30.

// listen the timer events
MessageConsumer consumer = eventBus.consumer("my scheduler:my timer");
consumer.handler (
    message -> {
        System.out.println(message.body());
      }
);
// description of timers
JsonObject mondayTimer = (new JsonObject()).put("type", "cron")
    .put("seconds", "0").put("minutes", "30").put("hours", "8")
    .put("days of month", "*").put("months", "*")
    .put("days of week", "Monday");
JsonObject fridayTimer = (new JsonObject()).put("type", "cron")
    .put("seconds", "0").put("minutes", "30").put("hours", "17")
    .put("days of month", "*").put("months", "*")
    .put("days of week", "Friday");
// union timer - combines mondayTimer and fridayTimer
JsonArray combination = (new JsonArray()).add(mondayTimer)
    .add(fridayTimer);
JsonObject timer = (new JsonObject()).put("type", "union")
    .put("timers", combination);
// create scheduler and timer
eventBus.send (
    "chime",
    (new JsonObject()).put("operation", "create")
        .put("name", "my scheduler:my timer")
        .put("description", timer)
);

Ensure that the Ceylon verticle factory with the right version is available on the class path.

At the end.

herd.schedule.chime module provides some features not mentioned here:

  • convenient builders useful to fill in JSON description of various timers;
  • proxying event bus with conventional interfaces;
  • reading JSON timer event into an object;
  • attaching JSON message to the timer fire event;
  • managing time zones.

There are also some ideas for the future:

  • custom or user-defined timers;
  • limiting the timer fire time / date with calendar;
  • extracting timer fire message from external source.

This is a very quick introduction to Chime; if you are interested, you may read more in the Chime documentation or even contribute.

Thanks for reading and enjoy coding!

Presentation of the Vert.x-Swagger project


This post is an introduction to the Vert.x-Swagger project, and describes how to use the Swagger-Codegen plugin and the SwaggerRouter class.

Eclipse Vert.x & Swagger

Vert.x and Vert.x Web are very convenient for writing REST APIs, especially the Router, which is very useful to manage all the resources of an API.

But when I start a new API, I usually use the “design-first” approach and Swagger is my best friend to define what my API is supposed to do. And then comes the “boring” part of the job: converting the swagger file content into Java code. That's always the same: resources, operations, models…

Fortunately, Swagger provides a codegen tool: Swagger-Codegen. With this tool, you can generate a server stub based on your swagger definition file. However, even if this generator supports many different languages and frameworks, Vert.x is missing.

This is where the Vert.x-Swagger project comes in.

The project

Vert.x-Swagger is a maven project providing 2 modules.

vertx-swagger-codegen

It's a Swagger-Codegen plugin which adds the capability of generating a Java Vert.x web server to the generator.

The generated server mainly contains:

  • POJOs for definitions
  • one Verticle per tag
  • one MainVerticle, which manages the other API verticles and starts an HttpServer.

The MainVerticle uses vertx-swagger-router.

vertx-swagger-router

The main class of this module is SwaggerRouter. It's more or less a factory (and maybe I should rename the class) that can create a Router, using the swagger definition file to configure all the routes. For each route, it extracts parameters from the request (Query, Path, Header, Body, Form) and sends them on the eventBus, using either the operationId as the address or a computed id (just a parameter in the constructor).

Let's see how it works

For this post, I will use a simplified swagger file, but you can find a more complex example here, based on the petstore swagger file.

Generating the server

First, choose your swagger definition. Here's a YAML file, but it could be a JSON file:

Then, download these libraries:

Finally, run this command:

java -cp /path/to/swagger-codegen-cli-2.2.2.jar:/path/to/vertx-swagger-codegen-1.0.0.jar io.swagger.codegen.SwaggerCodegen generate \
  -l java-vertx \
  -o path/to/destination/folder \
  -i path/to/swagger/definition \
  --group-id your.group.id \
  --artifact-id your.artifact.id

For more information about how Swagger-Codegen works, you can read https://github.com/swagger-api/swagger-codegen#getting-started

You should have something like that in your console:

[main] INFO io.swagger.parser.Swagger20Parser - reading from ./wineCellarSwagger.yaml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/Bottle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/CellarInformation.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/BottlesApi.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/BottlesApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/InformationApi.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/InformationApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/swagger.json
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/MainApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/vertx-default-jul-logging.properties
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/pom.xml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/README.md
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/.swagger-codegen-ignore
And this in your destination folder:

Generated sources

What has been created?

As you can see in 1, the vertx-swagger-codegen plugin has created one POJO per definition in the swagger file.

Example: the bottle definition

In 2a and 2b you can find :

  • an interface which contains a function per operation
  • a verticle which defines all operationIds and creates EventBus consumers

Example: the Bottles interface

Example: the Bottles verticle

… and now?

At line 23 of BottlesApiVerticle.java, you can see this:

BottlesApi service = new BottlesApiImpl();
This line will not compile until the BottlesApiImpl class is created.

In all XXXAPIVerticles, you will find a variable called service. It is of XXXAPI type and it is instantiated with a XXXAPIImpl constructor. This class does not exist yet, since it is the business part of your API.

And so you will have to create these implementations.
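
A skeleton could look like the following. This is only a hypothetical sketch: the actual method names and signatures come from the BottlesApi interface generated from your own swagger file, so adapt it to what the codegen produced:

public class BottlesApiImpl implements BottlesApi {

    @Override
    public void getBottles(Handler<AsyncResult<List<Bottle>>> resultHandler) {
        // business code goes here: load the bottles from your store of choice
        resultHandler.handle(Future.succeededFuture(new ArrayList<>()));
    }

    // ... one implementation per operation declared in the generated interface
}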

Fine, but what if I don't want to build my API like this?

Well, Vert.x is unopinionated but the way the vertx-swagger-codegen creates the server stub is not. So if you want to implement your API the way you want, while enjoying dynamic routing based on a swagger file, the vertx-swagger-router library can be used standalone.

Just import this jar into your project:

You will be able to create your Router like this:

FileSystem vertxFileSystem = vertx.fileSystem();
vertxFileSystem.readFile(YOUR_SWAGGER_FILE, readFile -> {
    if (readFile.succeeded()) {
        Swagger swagger = new SwaggerParser().parse(readFile.result().toString(Charset.forName("utf-8")));
        Router swaggerRouter = SwaggerRouter.swaggerRouter(Router.router(vertx), swagger, vertx.eventBus(), new OperationIdServiceIdResolver());
        […]
   } else {
        […]
   }
});
You can ignore the last parameter in SwaggerRouter.swaggerRouter(...). As a result, addresses will be computed instead of using operationId from the swagger file. For instance, GET /bottles/{bottle_id} will become GET_bottles_bottle-id

Conclusion

Vert.x and Swagger are great tools to build and document an API, but using both in the same project can be painful. The Vert.x-Swagger project was made to save time, letting developers focus on business code. It can be seen as an API framework over Vert.x.

You can also use the SwaggerRouter in your own project without using Swagger-Codegen.

In future releases, more information from the swagger file will be used to configure the router, and certainly other languages will be supported.

Though Vert.x is polyglot, the Vert.x-Swagger project only supports Java. If you want to contribute to support more languages, you're welcome :)

Thanks for reading.

Preview of a guide for Java developers


I could not attend the last Eclipse Vert.x community face-to-face meeting last fall, but one item that was discussed is the need for guides aimed at certain types of developers. One of my missions as part of joining the team was to work on this and I’m very happy to share it with you today!

A gentle guide to asynchronous programming with Eclipse Vert.x for enterprise application developers

The guide is called “A gentle guide to asynchronous programming with Eclipse Vert.x for enterprise application developers” and it is an introduction to asynchronous programming with Vert.x, primarily aimed at developers familiar with mainstream non-asynchronous web development frameworks and libraries (e.g., Java EE, Spring).

Quoting the introduction:

We will start from a wiki web application backed by a relational database and server-side rendering of pages; then we will evolve the application through several steps until it becomes a modern single-page application with “real-time” web features. Along the way you will learn to:

  1. Design a web application with server-side rendering of pages through templates, and using a relational database for persisting data.
  2. Cleanly isolate each technical component as a reusable event processing unit called a verticle.
  3. Extract Vert.x services for facilitating the design of verticles that communicate with each other seamlessly both within the same JVM process or among distributed nodes in a cluster.
  4. Testing code with asynchronous operations.
  5. Integrating with third-party services exposing a HTTP/JSON web API.
  6. Exposing a HTTP/JSON web API.
  7. Securing and controlling access using HTTPS, user authentication for web browser sessions and JWT tokens for third-party client applications.
  8. Refactoring some code to use reactive programming with the popular RxJava library and its Vert.x integration.
  9. Client-side programming of a single-page application with AngularJS.
  10. Real-time web programming using the unified Vert.x event bus integration over SockJS.

The guide takes a gradual approach by starting with a “quick and dirty” solution, then refactoring it properly, exposing the core Vert.x concepts, adding features, and moving from callbacks to RxJava.

We need your feedback!

The code is available at https://github.com/vert-x3/vertx-guide-for-java-devs. You can report feedback as Github issues to that repository and even offer pull-requests.

You can check it out from GitHub (the AsciiDoc is being rendered fine from the repository interface) or you can check out pre-rendered HTML and PDF versions that I am temporarily sharing and keeping up-to-date from my Dropbox: https://www.dropbox.com/sh/ni9znfkzlkl3q12/AABn-OCi1CZfgbTzOU0jYQpJa?dl=0

Many thanks to Thomas Segismont and Julien Viet who contributed some parts, and also to the people who reviewed it privately.

As usual, we welcome your feedback!

OpenAPI (fka Swagger) 3 support in Eclipse Vert.x now in test stage!


As a GSoC 2017 student, I'm currently working on embedded support for the OpenAPI 3 standard inside the Eclipse Vert.x framework. Now, after a lot of work, you can try it!

Why OpenAPI 3?

OpenAPI 2 is the most important industry-grade standard for API specifications. As you can see on the official blog of the OpenAPI Initiative, the release of version 3 is around the corner, so we want to give our community the latest tools for the latest standards!

The Vert.x project objective is to give you more integrated tools. With this new support, it gives you the ability to use the Design Driven (or Design First) approach without loading any third-party libraries.

Features

The currently supported features are the following (we refer to specification version 3.0.0-rc2):

  • OpenAPI 3 compliant API specification validation (thanks to Kaizen-OpenApi-Parser) with loading of external Json schemas
  • Automatic request validation
  • Automatic mount of security validation handlers
  • Automatic 501 response for not implemented operations
  • Router factory to provide all these features to users

Automatic request validation is provided by a new handler: ValidationHandler. You can also define your own ValidationHandler without API specifications, but I will discuss it later.

The request validation (provided by the subclass OpenAPI3RequestValidationHandler) currently supports:

  • Parameters defined in the Parameter object. We support every type of parameter, including object and array. We also support every type description field (for example format, minimum, maximum, etc). Also, at the moment, we support every combination of the style and explode fields (excluding the matrix and label styles)
  • Body defined in new RequestBody object. In particular:
    • For application/json the validation handler will take schema that you have defined in schema object and will validate json bodies with it
    • For application/x-www-form-urlencoded and multipart/form-data the validation handler will take care of validating every parameter in the form attributes. It currently supports only comma-separated values for objects and arrays
    • For other parameter types it will check Content-Type header

Request validation errors are carried with the RoutingContext, encapsulated in an object called ValidationException, so you have to attach a failure handler to check if something went wrong during validation. The RoutingContext also carries a new object called RequestParameters that encapsulates all request parameters, deserialized and parsed.

The router factory is intended to give you a really simple user interface to the OpenAPI 3 support. Its most important features are:

  • Async loading of specification and its schema dependencies
  • Automatic conversion of OpenAPI style paths to Vert.x style paths
  • Lazy methods: operations (combination of paths and HTTP methods) are mounted in definition order inside specification
  • Automatic mount of security validation handlers

Also, it’s planned to release a project skeleton generator based on API spec.

Startup your project

We are in a testing stage, so the official vertx-web repo doesn't contain it yet. To include the modified version of vertx-web, replace your vertx-web Maven dependency with this one:

<dependency>
  <groupId>com.github.slinkydeveloper</groupId>
  <artifactId>vertx-web</artifactId>
  <version>89d6254d50</version>
</dependency>

Then you have to add this Maven repository in your pom.xml:

<repositories>
  <repository>
    <id>jitpack.io</id>
    <url>https://jitpack.io</url>
  </repository>
</repositories>

You can also use it with Gradle.

Now you can start using OpenAPI 3 inside your Vert.x powered app!

First of all you need to load the specification and construct the router factory:

// Load the api spec. This operation is asynchronous
OpenAPI3RouterFactory.createRouterFactoryFromFile(this.vertx, "src/main/resources/petstore.yaml", ar -> {
    if (ar.succeeded()) {
        // Spec loaded with success
        OpenAPI3RouterFactory routerFactory = ar.result();
    } else {
        // Something went wrong during router factory initialization
        Throwable exception = ar.cause();
        logger.error("Ops!", exception);
    }
});

Handlers mounting

Now load your first path. There are two functions to load the handlers:

  • addHandler(HttpMethod method, String path, Handler handler, Handler failureHandler)
  • addHandlerByOperationId(String operationId, Handler handler, Handler failureHandler)

These two functions take a handler and a failure handler. You can, of course, add multiple handlers to the same operation, without overwriting the existing ones.

Add operations with operationId
Using a combination of path and HTTP method is allowed, but it's better to add operation handlers by operationId, for performance reasons and to avoid path nomenclature errors

This is an example of addHandlerByOperationId():

// Add an handler with operationId
routerFactory.addHandlerByOperationId("listPets", routingContext -> {
    // Handle listPets operation (GET /pets)
}, routingContext -> {
    // Handle failure
});

This is an example of addHandler:

// Add an handler with a combination of HttpMethod and path
routerFactory.addHandler(HttpMethod.POST, "/pets", routingContext -> {
    // Handle /pets POST operation
}, routingContext -> {
    // Handle failure
});

Request parameters

Now you can freely use request parameters. To get the RequestParameters object:

RequestParameters params = routingContext.get("parsedParameters");

The RequestParameters object provides methods to access query, cookie, header, path, form and entire body parameters. Here are some examples of how to use this object.

Parameter with name awesomeParameter with type integer in query:

RequestParameter awesomeParameter = params.queryParameter("awesomeParameter");
if (awesomeParameter != null) {
    // awesomeParameter exists, but we are not sure whether it is empty or not
    // (query parameters can be empty with allowEmptyValue: true)
    if (!awesomeParameter.isEmpty()) {
      // Now we are sure that it exists and it's not empty, so we can extract it
      Integer awesome = awesomeParameter.getInteger();
    } else {
      // Parameter exists, but it's empty value
    }
} else {
    // Parameter doesn't exist (it's not required)
}

As you can see, every parameter is mapped to its respective object (integer to Integer, integer with format: int64 to Long, float to Float and so on).

Comma separated array with name awesomeParameters with type integer in query:

RequestParameter awesomeParameters = params.queryParameter("awesomeParameters");
if (awesomeParameters != null && !awesomeParameters.isEmpty()) {
    List awesomeList = awesomeParameters.getArray();
    for (RequestParameter awesome : awesomeList) {
      Integer a = awesome.getInteger();
    }
} else {
  // awesomeParameters not found or empty string
}

JSON Body:

RequestParameter body = params.body();
if (body != null)
  JsonObject jsonBody = body.getJsonObject();

Security handling

You can mount only one security handler for a combination of schema and scope.

To add a security handler only with a schema name:

routerFactory.addSecurityHandler("security_scheme_name", routingContext -> {
    // Handle security here and then call next()
    routingContext.next();
});

To add a security handler with a combination of schema name and scope:

routerFactory.addSecuritySchemaScopeValidator("security_scheme_name", "scope_name", routingContext -> {
    // Handle security here and then call next()
    routingContext.next();
});

You can define security handlers wherever you want, but you must define them!
During Router instantiation, if the factory finds a path that requires a security schema without an assigned handler, it will throw a RouterFactoryException

Error handling

Every time you add a handler for an operation, you can also add a failure handler. To handle a ValidationException:

Throwable failure = routingContext.failure();
if (failure instanceof ValidationException)
    // Handle Validation Exception
    routingContext.response().setStatusCode(400).setStatusMessage("ValidationError").end(failure.getMessage());

Also the router factory provides two other tools:

  • It automatically mounts a 501 Not Implemented handler for operations where you haven’t mounted any handler
  • It can load a default ValidationException failure handler (You can enable this feature via routerFactory.enableValidationFailureHandler(true))

And now use it!

Now you are ready to generate the Router!

Router router = routerFactory.getRouter();

// Now you can use your Router instance
HttpServer server = vertx.createHttpServer(new HttpServerOptions().setPort(8080).setHost("localhost"));
server.requestHandler(router::accept).listen();

Lazy methods!
getRouter() generates the Router object lazily, so you don't have to care about code definition order

And now?

You can find a complete example here: OpenAPI 3 Vert.x example gists

You can access the documentation (WIP) here (for other languages, check out here), but you can also check the Javadoc inside the code for the most important classes.

Follow my fork of the vertx-web project to get the latest updates.

We want you!
Please give us your feedback by opening an issue here

Vert.x 3.5.0.Beta1


It’s summer time and we have just released Vert.x 3.5.0.Beta1!

Let’s go RxJava2

First and foremost this release delivers the RxJava2 API with support of its full range of types.

In addition to Single, Rxified APIs also expose the Completable and Maybe types:

// expose Handler<AsyncResult<Void>>
Completable completable = server.rxClose();

completable.subscribe(() -> System.out.println("closed"));

// expose Handler<AsyncResult<String>> where the result can be null
Maybe<String> ipAddress = dnsClient.rxLookup("www.google.com");
ipAddress.subscribe(
  value -> System.out.println("resolved to " + value),
  err -> err.printStackTrace(),
  () -> System.out.println("does not resolve"));

RxJava augments Vert.x streams with a toObservable() method; RxJava2 adds the toFlowable() method:

// Flowable maps to a ReadStream<Buffer> (back-pressured stream)
Flowable<Buffer> flowable = asyncFile.toFlowable();

// but we can still get an Observable<Buffer> (non back-pressured stream)
Observable<Buffer> observable = asyncFile.toObservable();

What’s so different between Flowable and Observable? The former handles back-pressure, i.e. the subscriber can control the flow of items, while the latter cannot.

You can read the documentation in the beta section of the docs or go straight to the examples.

MQTT Client

In Vert.x 3.4 we added the MQTT server, 3.5 completes the MQTT story with the MQTT client:

MqttClient mqttClient = MqttClient.create(vertx,
  new MqttClientOptions()
    .setPort(BROKER_PORT)
    .setHost(BROKER_HOST));

mqttClient.connect(ar -> {
  if (ar.succeeded()) {
    System.out.println("Connected to a server");

    mqttClient.publish(
      MQTT_TOPIC,
      Buffer.buffer(MQTT_MESSAGE),
      MqttQoS.AT_MOST_ONCE,
      false,
      false,
      s -> mqttClient.disconnect(d -> System.out.println("Disconnected from server")));
  } else {
    System.out.println("Failed to connect to a server");
    ar.cause().printStackTrace();
  }
});

You can find MQTT client and server examples here

Auth handler chaining

There are times when you want to support multiple authN/authZ mechanisms in a single application.

Vert.x Web supports auth handler chaining:

ChainAuthHandler chain = ChainAuthHandler.create();

// add http basic auth handler to the chain
chain.append(BasicAuthHandler.create(provider));

// add form redirect auth handler to the chain
chain.append(RedirectAuthHandler.create(provider));

// secure your route
router.route("/secure/resource").handler(chain);

// your app
router.route("/secure/resource").handler(ctx -> {
  // do something...
});

Finally

This beta also provides:

  • Vert.x Config stores for Vault and Consul
  • Upgrade to Hazelcast 3.8.2

Use it!

You can use and consume it in your projects from Maven or Gradle as usual with the version 3.5.0.Beta1.

You can also download various binaries from Maven Central:

As usual, feedback is very important to us and one goal of this beta release is to let the community provide early feedback!

The final release is expected at the beginning of October.

Enjoy

Introducing Vert.x MQTT client


In this article we will see how to set up the new Vert.x MQTT client. There is also a real example, so you can try it quickly.

If you are using Maven or Gradle, add the following dependency to the dependencies section of your project descriptor to access the Vert.x MQTT client:

  • Maven (in your pom.xml):
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-mqtt</artifactId>
  <version>3.5.0.Beta1</version>
</dependency>
  • Gradle (in your build.gradle file):
dependencies {
  compile 'io.vertx:vertx-mqtt:3.5.0.Beta1'
}

Now that you’ve set up your project, you can create a simple application which will receive all messages from all broker channels:

import io.vertx.core.AbstractVerticle;
import io.vertx.mqtt.MqttClient;
import io.vertx.mqtt.MqttClientOptions;

import java.io.UnsupportedEncodingException;

public class MainVerticle extends AbstractVerticle {

  @Override
  public void start() {
     MqttClientOptions options = new MqttClientOptions();
      // specify broker host
      options.setHost("iot.eclipse.org");
      // specify max size of message in bytes
      options.setMaxMessageSize(100_000_000);

    MqttClient client = MqttClient.create(vertx, options);

    client.publishHandler(s -> {
      try {
        String message = new String(s.payload().getBytes(), "UTF-8");
        System.out.println(String.format("Receive message with content: \"%s\" from topic \"%s\"", message, s.topicName()));
      } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
      }
    });

    client.connect(s -> {
      // subscribe to all subtopics
      client.subscribe("#", 0);
    });
  }
}

The publishHandler is the handler called each time the broker, located at iot.eclipse.org:1883, sends a message to you from the topics you have subscribed to.

But only providing a handler is not enough: you should also connect to the broker and subscribe to some topics. For this reason, you should use the connect method and then call subscribe once the connection is established.

To deploy this verticle from an application, your main method should contain something like this:

Vertx vertx = Vertx.vertx();
vertx.deployVerticle(MainVerticle.class.getCanonicalName());

When you have completed all steps correctly, the result should look like this:

As an alternative and recommended way to bootstrap Vert.x applications, you can use the vertx-maven-starter or vertx-gradle-starter. For this guide I used the first one. The final source code is available here. If you would like to learn more about the Vert.x MQTT client API, check out the full documentation and more examples.

Thank you for reading!

Cheers!

An Eclipse Vert.x Gradle Plugin


Eclipse Vert.x is a versatile toolkit, and as such it does not have any strong opinion on the tools that you should be using.

Gradle is a popular build tool in the JVM ecosystem, and it is quite easy to use for building Vert.x projects, as shown in one of the vertx-examples samples where a so-called fat Jar is being produced.

The new Vert.x Gradle plugin offers an opinionated plugin for building Vert.x applications with Gradle.

It automatically applies the following plugins:

  • java (and sets the source compatibility to Java 8),
  • application + shadow to generate fat Jars with all dependencies bundled,
  • nebula-dependency-recommender-plugin so that you can omit versions for modules from the Vert.x stack.

The plugin automatically adds io.vertx:vertx-core as a compile dependency, so you don’t need to do it.

The plugin provides a vertxRun task that takes advantage of the Vert.x auto-reloading capabilities, so you can just run it and have your code automatically compiled and reloaded as you make changes.

Getting started

A minimal build.gradle looks like:

plugins {
  id 'io.vertx.vertx-plugin' version '0.0.4'
}

repositories {
  jcenter()
}

vertx {
  mainVerticle = 'sample.App'
}

Provided sample.App is a Vert.x verticle, then:

  1. gradle shadowJar builds an executable Jar with all dependencies: java -jar build/libs/simple-project-fat.jar, and
  2. gradle vertxRun starts the application and automatically recompiles (gradle classes) and reloads the code when any file under src/ is being added, modified or deleted.

Using with Kotlin (or Groovy, or…)

The plugin integrates well with plugins that add configurations and tasks triggered by the classes task.

Here is how to use the plugin with Kotlin (replace the version numbers with the latest ones…):

plugins {
  id 'io.vertx.vertx-plugin' version 'x.y.z'
  id 'org.jetbrains.kotlin.jvm' version 'a.b.c'
}

repositories {
  jcenter()
}

dependencies {
  compile 'io.vertx:vertx-lang-kotlin'
  compile 'org.jetbrains.kotlin:kotlin-stdlib-jre8'
}

vertx {
  mainVerticle = "sample.MainVerticle"
}

tasks.withType(org.jetbrains.kotlin.gradle.tasks.KotlinCompile).all {
  kotlinOptions {
    jvmTarget = "1.8"
  }
}

Using with WebPack (or any other custom task)

WebPack is popular to bundle web assets, and there is even a guide for its integration with Gradle.

Mixing the Vert.x Gradle plugin with WebPack is very simple, especially in combination with the com.moowork.node plugin that integrates Node into Gradle.

Suppose we want to mix Vert.x code and JavaScript with Gradle and WebPack. We assume a package.json as:

{
  "name": "webpack-sample",
  "version": "0.0.1",
  "description": "A sample with Vert.x, Gradle and Webpack",
  "main": "src/main/webapp/index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\"&& exit 1"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "webpack": "^2.7.0"
  },
  "dependencies": {
    "axios": "^0.16.2"
  }
}

and webpack.config.js as:

module.exports = {
  entry: './src/main/webapp/index.js',
  output: {
    filename: './build/resources/main/webroot/bundle.js'
  }
}

The build.gradle file is the following:

plugins {
  id 'io.vertx.vertx-plugin' version '0.0.4'
  id 'com.moowork.node' version '1.2.0'
}

repositories {
  jcenter()
}

dependencies {
  compile "io.vertx:vertx-web"
}

vertx {
  mainVerticle = "sample.MainVerticle"
  watch = ["src/**/*", "build.gradle", "yarn.lock"]
  onRedeploy = ["classes", "webpack"]
}

task webpack(type: Exec) {
  inputs.file("$projectDir/yarn.lock")
  inputs.file("$projectDir/webpack.config.js")
  inputs.dir("$projectDir/src/main/webapp")
  outputs.dir("$buildDir/resources/main/webroot")
  commandLine "$projectDir/node_modules/.bin/webpack"
}

This custom build exposes a webpack task that invokes WebPack, with proper file tracking so that Gradle knows when the task is up-to-date or not.

The Node plugin adds many tasks, and integrates fine with npm or yarn, so fetching all NPM dependencies is done by calling ./gradlew yarn.

The vertxRun task now redeploys on modifications to files in src/ (and sub-folders), build.gradle and yarn.lock, calling both the classes and webpack tasks:

Summary

The Vert.x Gradle plugin provides lots of defaults to configure a Gradle project for Vert.x applications, producing fat Jars and offering a running task with automatic redeployment. The plugin also integrates well with other plugins and external tools for which a Gradle task is available.

The project is still in its early stages and we are looking forward to hearing from you!

Eclipse Vert.x 3.5.0 released !


The Vert.x team is pleased to announce the release of Eclipse Vert.x 3.5.0.

As usual it delivers an impressive number of high quality features.

Let’s go RxJava2

First and foremost this release delivers the RxJava2 API with support of its full range of types.

In addition to Single, Rxified APIs also expose the Completable and Maybe types:

// expose Handler<AsyncResult<Void>>
Completable completable = server.rxClose();

completable.subscribe(() -> System.out.println("closed"));

// expose Handler<AsyncResult<String>> where the result can be null
Maybe<String> ipAddress = dnsClient.rxLookup("www.google.com");
ipAddress.subscribe(
  value -> System.out.println("resolved to " + value),
  err -> err.printStackTrace(),
  () -> System.out.println("does not resolve"));

RxJava augments Vert.x streams with a toObservable() method, likewise RxJava2 adds the toFlowable() method:

// Flowable maps to a ReadStream<Buffer> (back-pressured stream)
Flowable<Buffer> flowable = asyncFile.toFlowable();

// but we can still get an Observable<Buffer> (non back-pressured stream)
Observable<Buffer> observable = asyncFile.toObservable();

What’s so different between Flowable and Observable? The former handles back-pressure, i.e. the subscriber can control the flow of items, while the latter cannot.

You can read the documentation or go straight to the examples.

Kotlin coroutines

Support for Kotlin Coroutines is one of my favourite 3.5 features (by the way I’ll present a talk about Vert.x and coroutines at KotlinConf).

Coroutines allow you to reason about asynchronous flow the same way you do about traditional sequential flow, with the extra bonus of being able to use the try/catch/finally super combo:

val movie = ctx.pathParam("id")
val rating = Integer.parseInt(ctx.queryParam("getRating")[0])
val connection = awaitResult<SQLConnection> { client.getConnection(it) }
try {
  val result = awaitResult<ResultSet> { connection.queryWithParams("SELECT TITLE FROM MOVIE WHERE ID=?", json { array(movie) }, it) }
  if (result.rows.size == 1) {
    awaitResult<UpdateResult> { connection.updateWithParams("INSERT INTO RATING (VALUE, MOVIE_ID) VALUES ?, ?", json { array(rating, movie) }, it) }
    ctx.response().setStatusCode(200).end()
  } else {
    ctx.response().setStatusCode(404).end()
  }
} finally {
  connection.close()
}

This example is borrowed from our examples.

NOTE: I’ve used try/finally purposely instead of Kotlin’s use extension method

MQTT Client

In Vert.x 3.4 we added the MQTT server, 3.5 completes the MQTT story with the MQTT client:

MqttClient mqttClient = MqttClient.create(vertx,
  new MqttClientOptions()
    .setPort(BROKER_PORT)
    .setHost(BROKER_HOST));

mqttClient.connect(ar -> {
  if (ar.succeeded()) {
    System.out.println("Connected to a server");

    mqttClient.publish(
      MQTT_TOPIC,
      Buffer.buffer(MQTT_MESSAGE),
      MqttQoS.AT_MOST_ONCE,
      false,
      false,
      s -> mqttClient.disconnect(d -> System.out.println("Disconnected from server")));
  } else {
    System.out.println("Failed to connect to a server");
    ar.cause().printStackTrace();
  }
});

You can find MQTT client and server examples here

Web API contracts

With the new OpenAPI router factory we can focus on the API implementation and not on the validation of the input. The usage is quite simple:

OpenAPI3RouterFactory.createRouterFactoryFromFile(vertx, "petstore.yaml", ar -> {
  if (ar.succeeded()) {
    // Spec loaded with success
    OpenAPI3RouterFactory routerFactory = ar.result();

    // add your API and security handlers to the factory

    // add it to a server
    vertx.createHttpServer()
      .requestHandler(routerFactory.getRouter()::accept)
      .listen();
  } else {
    // Something went wrong during router factory initialization
  }
});

Now as a developer you only need to care about the API and not about the validation. The OpenAPI router factory will ensure that a request is first validated against the contract before your handler is invoked.
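
As a sketch of what that looks like once the factory is ready (assuming the petstore spec declares a listPets operation id, as in the earlier example; treat the method calls as a sketch rather than a definitive listing of the API), a handler and a failure handler can be attached by operation id and work only with already validated input:

routerFactory.addHandlerByOperationId("listPets", routingContext -> {
  // The request already complies with the contract at this point
  RequestParameters params = routingContext.get("parsedParameters");
  routingContext.response().setStatusCode(200).end();
});

routerFactory.addFailureHandlerByOperationId("listPets", routingContext -> {
  // Handle a validation or processing failure for this operation
  routingContext.response().setStatusCode(400).end();
});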

Java 9 support

Java 9 was released a few days ago. The Vert.x stack has been carefully tested on Java 9 and most of our components run on it (Groovy does not run well on Java 9, please see the support matrix).

As a bonus you can now use HTTP/2 out of the box with JDK SSL!
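
For instance, here is a minimal sketch of an HTTP/2 server that relies purely on the JDK SSL engine; the certificate and key paths are placeholders:

HttpServerOptions options = new HttpServerOptions()
  .setSsl(true)
  .setUseAlpn(true) // ALPN is required for HTTP/2
  .setJdkSslEngineOptions(new JdkSSLEngineOptions())
  .setPemKeyCertOptions(new PemKeyCertOptions()
    .setCertPath("server-cert.pem") // placeholder certificate
    .setKeyPath("server-key.pem")); // placeholder private key

vertx.createHttpServer(options)
  .requestHandler(req -> req.response().end("Hello over HTTP/2"))
  .listen(8443);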

You can also use Vert.x jars as anonymous modules.

Event driven JSON Parsing

We now provide an event driven JSON parser emitting parse events, which is very handy when you need to handle very large JSON structures and you don’t want to buffer them, which would introduce extra latency and increase memory consumption.

The parser allows you to switch between fine grained JSON parse events and full structures; for instance, you can parse an array of objects very efficiently:

JsonParser parser = JsonParser.newParser();

// The parser will handle JSON objects as values
parser.objectValueMode();

parser.handler(event -> {
  switch (event.type()) {
    case START_ARRAY:
      // Start the array
      break;
    case END_ARRAY:
      // End the array
      break;
    case VALUE:
      // Handle each object
      break;
  }
});

Single SQL operations

Single SQL operations (aka one-shot) have been drastically simplified: most of the SQLOperations operations can now be performed directly on the SQLClient:

client.queryWithParams("SELECT AVG(VALUE) AS VALUE FROM RATING WHERE MOVIE_ID=?", new JsonArray().add(id), ar -> {
  if (ar.succeeded()) {
    int value = ar.result().getRows().get(0).getInteger("VALUE");
    // Continue
  }
});

Under the hood, the client takes care of the pool acquire/release interaction for you.

Native transport and domain sockets

We now support native transports on Linux (Epoll) and MacOS (KQueue), as well as UNIX domain sockets for NetServer/NetClient (HttpServer/HttpClient should support UNIX domain sockets soon).
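
A minimal sketch of both features, assuming the matching native transport dependency (for instance netty-transport-native-epoll on Linux) is on the classpath; the socket path is a placeholder:

// Prefer the native transport when it is available on the classpath
Vertx vertx = Vertx.vertx(new VertxOptions().setPreferNativeTransport(true));

// Serve a NetServer over a UNIX domain socket (placeholder path)
SocketAddress address = SocketAddress.domainSocketAddress("/var/tmp/myservice.sock");
vertx.createNetServer()
  .connectHandler(socket -> socket.write("hello\n"))
  .listen(address, ar -> {
    if (ar.succeeded()) {
      System.out.println("Listening on " + address.path());
    } else {
      ar.cause().printStackTrace();
    }
  });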

Auth handler chaining

There are times when you want to support multiple authN/authZ mechanisms in a single application.

Vert.x Web supports auth handlers chaining

Vert.x config improvements

Vert.x Config allows configuring your application by assembling config chunks from different locations such as file, http, zookeeper…

In this version, we have added the support for Consul and Vault.

With the Consul config store, you can retrieve your configuration from a Consul server - so in other words, distribute the configuration from your orchestration infrastructure.

The Vault config store lets you retrieve secrets avoiding hard coding secrets or distributing credentials using an insecure way. Vault enforces the security of your secrets and only allowed applications can retrieve them. In other words, now you can keep your secrets secret.
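
As a sketch of how such a store is plugged in (the host, port and prefix values are placeholders, and the exact configuration keys should be double checked against the vertx-config-consul documentation):

ConfigStoreOptions consulStore = new ConfigStoreOptions()
  .setType("consul")
  .setConfig(new JsonObject()
    .put("host", "localhost") // placeholder Consul agent
    .put("port", 8500)
    .put("prefix", "my-app")); // placeholder key prefix, check the store docs

ConfigRetriever retriever = ConfigRetriever.create(vertx,
  new ConfigRetrieverOptions().addStore(consulStore));

retriever.getConfig(ar -> {
  if (ar.succeeded()) {
    JsonObject config = ar.result();
    // use the configuration
  }
});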

ACKs

On behalf of the team, I want to thank all the contributors to this release, including the Google Summer of Code students (Pavel Drankov, Francesco Guardiani and Yunyu Lin) who delivered impressive work.

Finally

The release notes

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The event bus client using the SockJS bridge is available from NPM, Bower and as a WebJar:

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Eclipse Vert.x meets GraphQL


I recently added GraphQL support to Gentics Mesh and I thought it would be a good idea to boil down the essence of my implementation into an example so that I could share it in a simpler form. The example I’m about to show will not cover all aspects that I have added to the Gentics Mesh API (e.g. paging, search and error handling) but it will give you a basic overview of the parts that I put together. GraphQL does not require a GraphDB even if the name might suggest it.

Using a graphdb in combination with GraphQL does nevertheless provide you with some advantages which I will highlight later on.

What is GraphQL? What is it good for?

GraphQL as the name suggests is a new query language which can be used to load exactly the amount of data which you ask for.

The query is defined in a way that the query fields correlate to the JSON data being retrieved. In our Star Wars demo domain model, this query will load the name of human 1001, which is Darth Vader.

{
  vader:human(id: 1001) {
      name
  }
}

Would result in:

{
  "data": {
    "vader": {
      "name": "Darth Vader"}
  }
}

The Demo App

The demo application I built makes use of the graphql-java library. The data is stored in a graph database. I use OrientDB in combination with the OGM Ferma to provide a data access layer. GraphQL does not necessarily require a graph database but in this example I will make use of one and highlight the benefits of using a GraphDB for my use case.

You can find the sources here: https://github.com/Jotschi/vertx-graphql-example

Data

The StarWarsData class creates a Graph which contains the Star Wars Movies and Characters, Planets and their relations. The model is fairly simple. There is a single StarWarsRoot vertex which acts as a start element for various aggregation vertices: Movies are stored in MovieRoot, Planets in PlanetsRoot, Characters are stored in HumansRoot and DroidsRoot.

The model classes are used as wrappers for the specific graph vertices. The Ferma OGM is used to provide these wrappers. Each class contains methods which can be used to traverse the graph to locate the needed vertices. The found vertices are in turn again wrapped and can be used to locate other graph elements.

Schema

The next thing we need is the GraphQL schema. The schema describes each element which can be retrieved. It also describes the properties and relationships for these elements.

The graphql-java library provides an API to create the object types and schema information.

private GraphQLObjectType createMovieType() {
  return newObject().name("Movie")
    .description("One of the films in the Star Wars universe.")

    // .title
    .field(newFieldDefinition().name("title")
        .description("Title of the episode.")
        .type(GraphQLString)
        .dataFetcher((env) -> {
          Movie movie = env.getSource();
          return movie.getName();
        }))

    // .description
    .field(newFieldDefinition().name("description")
        .description("Description of the episode.")
        .type(GraphQLString))

    .build();
}

A type can be referenced via a GraphQLTypeReference once it has been created and added to the schema. This is especially important if you need to add fields which reference other types. Data fetchers are used to access the context, traverse the graph and retrieve properties from graph elements.

Another great source to learn more about the schema options is the GarfieldSchema example.

Finally, all the created types must be referenced by a central object type, the QueryType. The query type object is basically the root object for the query: it defines what query options are initially possible. In our case it is possible to load the hero of the saga, specific movies, humans or droids.
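
Here is a condensed sketch of such a query type and the schema wiring, reusing the hero data fetcher shown later in this article (everything apart from the hero field is trimmed down):

GraphQLObjectType queryType = newObject().name("QueryType")
  .field(newFieldDefinition().name("hero")
      .description("The hero of the Star Wars saga.")
      .type(new GraphQLTypeReference("Droid")) // reference a type created elsewhere
      .dataFetcher((env) -> {
        StarWarsRoot root = env.getSource();
        return root.getHero();
      }))
  .build();

GraphQLSchema schema = GraphQLSchema.newSchema()
  .query(queryType)
  .build();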

Verticle

The GraphQLVerticle is used to accept the GraphQL request and process it.

The verticle also contains a StaticHandler to provide the Graphiql Browser web interface. This interface will allow you to quickly discover and experiment with GraphQL.

The query handler accepts the query JSON data.
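
Roughly, the routing part of the verticle boils down to something like the following sketch (only the /browser/ path and port 3000 are taken from the example; the /graphql endpoint name is an assumption):

Router router = Router.router(vertx);

// Serve the Graphiql browser as static content
router.route("/browser/*").handler(StaticHandler.create());

// Accept the GraphQL query JSON (the endpoint name is an assumption)
router.post("/graphql").handler(BodyHandler.create());
router.post("/graphql").handler(ctx -> {
  JsonObject queryJson = ctx.getBodyAsJson();
  String query = queryJson.getString("query");
  // hand the query over to the transaction shown below
});

vertx.createHttpServer().requestHandler(router::accept).listen(3000);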

An OrientDB transaction is being opened and the query is executed:

demoData.getGraph().asyncTx((tx) -> {
    // Invoke the query and handle the resulting JSON
    GraphQL graphQL = newGraphQL(schema).build();
    ExecutionInput input = new ExecutionInput(query, null, queryJson, demoData.getRoot(), extractVariables(queryJson));
    tx.complete(graphQL.execute(input));
}, (AsyncResult rh) -> {
    ...
});

The execute method initially needs a context variable. This context is passed along with the query. In our case the context is the root element of the graph, demoData.getRoot(). This context element also serves as the initial source for our data fetchers.

.dataFetcher((env) -> {
    StarWarsRoot root = env.getSource();
    return root.getHero();
}))

The data fetchers for the hero type, on the other hand, will be able to access the hero element since the fetcher above returned it. Using this mechanism it is possible to traverse the graph. It is important to note that each invocation of a domain model method will directly access the graph database. This way it is possible to influence the graph database query down to the lowest level. When a property is omitted from the GraphQL query it will not be loaded from the graph. Thus there is no need to write an additional data access layer: all operations are directly mapped to the graph database.

The StarWarsRoot Ferma class getHero() method in turn defines a TinkerPop Gremlin traversal which is used to load the Vertex which represents the hero of the Star Wars saga.

Apache TinkerPop
Apache TinkerPop is an open source, vendor-agnostic, graph framework / API which is supported by many graph database vendors. One part of TinkerPop is the Gremlin traversal language which is great to query graph databases.
...
public Droid getHero() {
    // Follow the HAS_HERO edge and return the first Vertex which could be found.
    // Wrap the Vertex explicitly in the Droid Ferma class.
    return traverse((g) -> g.out(HAS_HERO)).nextOrDefaultExplicit(Droid.class, null);
}
...

Once the query has been executed, the result handler is invoked. It contains some code to process the result data and potential errors. It is important to note that a GraphQL query will always be answered with a 2xx HTTP status code. If an element which is referenced in the query can’t be loaded, an error will be added to the response JSON object.
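
A rough sketch of how such a result could be turned into that JSON response (the exact shape used by the example may differ):

private void sendResult(RoutingContext ctx, ExecutionResult result) {
  JsonObject response = new JsonObject();
  Object data = result.getData();
  if (data != null) {
    response.put("data", new JsonObject((Map<String, Object>) data));
  }
  if (!result.getErrors().isEmpty()) {
    JsonArray errors = new JsonArray();
    result.getErrors().forEach(error -> errors.add(new JsonObject().put("message", error.getMessage())));
    response.put("errors", errors);
  }
  // GraphQL always answers with a 2xx status, even when errors are present
  ctx.response().putHeader("Content-Type", "application/json").end(response.encode());
}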

Testing

Testing is fairly straightforward, although there are multiple approaches. One approach is to unit test the GraphQL types directly. Another option is to run queries against the endpoint.

The GraphQLTest class I wrote will run multiple queries against the endpoint. A parameterized JUnit test is used to iterate over the queries.

A typical query does not only contain the query data. The assertions on the response JSON are directly included in the query using plain comments.

I built an AssertJ assertion to check the comments of a query and verify that the assertion matches the response.

assertThat(response).compliesToAssertions(queryName);

Run the example

You can run the example by executing the GraphQLServer class and access the Graphiql browser on http://localhost:3000/browser/

Where to go from here?

The example is read-only. GraphQL also supports data mutation which can be used to actually modify and store data. I have not yet explored that part of GraphQL but I assume it might not be that hard to add mutation support to the example.

Additionally, it does not cover how to actually make use of such an API. I recently updated my Vert.x example which shows how to use Vert.x template handlers to build a small server which renders some pages using data which was loaded via GraphQL.

Thanks for reading. If you have any further questions or feedback don’t hesitate to send me a tweet to @Jotschi or @genticsmesh.

TCP Client using Eclipse Vert.x, Kotlin and Gradle build


As part of my hobby project to control a RaspberryPi using Google Home Mini and/or Alexa, I wanted to write a very simple TCP client that keeps a connection open to one of my custom written servers in the cloud (I will write another blog post to cover the server side at a later date). The requirement of the client is to send a shared secret upon connecting and then keep waiting for messages from the server. Vert.x, Kotlin and Gradle allow rapid development of such a project. The generated jar can be executed on the Raspberry Pi. These steps outline the project setup and related source code to showcase a Vert.x and Kotlin project with Gradle.

Project Directory Structure

From the command line (or via Windows Explorer, whatever you prefer to use) create a directory for the project, for instance vertx-net-client. Since we are using Kotlin, we will place all Kotlin files in the src/main/kotlin folder. The src/main/resources folder will contain our logging configuration related files.

cd vertx-net-client
mkdir -p src/main/kotlin
mkdir -p src/main/resources

Project Files

We need to add the following files to the project:

  • .gitignore If you want to check your project into git, you may consider adding the following .gitignore file at the root of your project
  • logback.xml This example is using slf4j and logback for logging. If you decide to use it in your project, you may also add the following logback.xml file in src/main/resources. Modify it as per your requirements. This example will log to the console.

Gradle Setup

We will use the Gradle build system for this project. If you don’t already have Gradle available on your system, download and unzip Gradle in a directory of your choice ($GRADLE_HOME is used here to represent this directory). This Gradle distribution will be used as a starting point to create Gradle wrapper scripts for our project. These scripts will allow our project to download and use the correct version of the Gradle distribution automatically without messing up the system. Really useful when building your project on a CI tool or on any other developer’s machine.

Run the following command in the project’s directory:

$GRADLE_HOME/bin/gradle wrapper

The above command will generate the following files and directories:

gradle/  gradlew  gradlew.bat

Gradle build file build.gradle

Create (or copy and modify) the following build.gradle in your project’s root directory. Our example Gradle build file uses the vertx-gradle-plugin.

In the project directory, run the following command to download the local Gradle distribution:

./gradlew
(or .\gradlew.bat if on Windows)

At this stage we should have the following file structure. This is also a good time to commit changes if you are working with git.

  • .gitignore
  • build.gradle
  • gradle/wrapper/gradle-wrapper.jar
  • gradle/wrapper/gradle-wrapper.properties
  • gradlew
  • gradlew.bat
  • src/main/resources/logback.xml

Now that our project structure is ready, it is time to add the meat of the project. You may use any IDE of your choice; my preference is IntelliJ IDEA. Create a new package under src/main/kotlin. The package name should be adapted from the following section of build.gradle:
vertx {
    mainVerticle = "info.usmans.blog.vertx.NetClientVerticle"
}

From the above example, the package name is info.usmans.blog.vertx

Add a new Kotlin Class/file in src/main/kotlin/info/usmans/blog/vertx as NetClientVerticle.kt

The contents of this class are as follows:

Explaining the Code

The fun main(args: Array<String>) function is not strictly required; it allows quickly running the Vert.x verticle from within the IDE. You will also notice a small hack in the method for setting the system property vertx.disableDnsResolver, which is to avoid a Netty bug that I observed when running on a Windows machine while the remote server is down. Of course, since we are using the vertx-gradle-plugin, we can also use gradle vertxRun to run our verticle. In this case the main method will not get called.

The override fun start() method calls fireReconnectTimer, which in turn calls the reconnect method. The reconnect method contains the connection logic to the server, and it calls fireReconnectTimer again if it is unable to connect to the server or gets disconnected from it. In the reconnect method, the socket.handler gets called whenever the server sends a message to the client.

socket.handler({ data ->
                        logger.info("Data received: ${data}")
                        //TODO: Do the work here ...
               })

Distributing the project

To create a redistributable jar, use the ./gradlew shadowJar command. Or, if using IntelliJ: from Gradle projects, Tasks, shadow, shadowJar (right click, Run). This command will generate ./build/libs/vertx-net-client-fat.jar.

Executing the client

The client jar can be executed using the following command:

java -DserverHost=127.0.0.1 -DserverPort=8888 -DconnectMessage="hello" -jar vertx-net-client-full.jar

If you wish to use SLF4J for Vert.x internal logging, you need to pass the system property vertx.logger-delegate-factory-class-name with the value io.vertx.core.logging.SLF4JLogDelegateFactory. The final command would look like:

java -DserverHost=127.0.0.1 -DserverPort=8888 -DconnectMessage="hello" -Dvertx.logger-delegate-factory-class-name="io.vertx.core.logging.SLF4JLogDelegateFactory" -jar vertx-net-client-full.jar

You can configure Vert.x logging levels in logback.xml file if required.

Conclusion

This post describes how easy it is to create a simple TCP client using Vert.x, Kotlin and Gradle build system. Hopefully the techniques shown here will serve as a starting point for your next DIY project.

Info
This post is adapted and reproduced from author’s blog post

Eclipse Vert.x based Framework URL Shortener Backend


AWS Lambda & Vertx Framework URL Shortener Backend

Intro

Recently I stumbled upon Vertx: an event-driven, asynchronous, lightweight, reactive, highly performant, polyglot application framework, ideal for writing micro-services. I played around with it for a while and really enjoyed the concept of serverless applications.

I developed a few apps and cases and started to wonder how to run and deploy them so that I get a 100% reliable service. I suddenly remembered the tech seminar that I attended recently, specifically a session about serverless apps with AWS (Amazon Web Services) Lambda. Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. Vertx and AWS Lambda are fairly similar concepts, so maybe they complement each other? As it turns out they do: Vertx can get the most out of your Lambdas.

Using the Serverless Framework to create, manage and deploy the new Lambdas, I was able to get this micro-service up and running in no time.

Enough with the talk, let’s see the implementation.

Code

Handler class, the entry point of the AWS request.

The first issue that I had was the sync event handler that is provided by AWS, so I had to bypass it with a Future. In the Handler class I first initiate the Vertx instance in a static block and deploy a few Verticles that will do the work. This class only receives the event, extracts the needed data from the request and passes the data to the Vertx EventBus. After the consumers handle the request, the Handler class will generate a proper response and finish the AWS request.

Line 4-18: This is where the Vertx instance is created, Verticles are deployed and the async JDBC client is created. I figured out that it is better to create the JDBC client here, as in some cases I got timeouts when that logic was in the Verticle start method.

Line 27-36: These are helper lines, parsing and formatting the data so I can pass it to the Verticles.

Line 38-45: I have decided to map the consumers to the address that is made of request method and url path, example POST/api. This means each API request is mapped to its own consumer in Verticle class.

Line 47-77: This is nothing but a block of code that handles the response that was passed from the Verticles to the Future and generates the final response that will be returned to API Gateway.
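
In broad strokes, the handler follows the shape sketched below; the event bus address scheme and the request map keys are illustrative only, and the real code lives in the linked repository:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import io.vertx.core.Vertx;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class Handler implements RequestHandler<Map<String, Object>, Map<String, Object>> {

  private static final Vertx vertx = Vertx.vertx();

  static {
    // Deploy the worker verticle once per Lambda container (constructor details omitted)
    vertx.deployVerticle(new UrlService());
  }

  @Override
  public Map<String, Object> handleRequest(Map<String, Object> request, Context context) {
    // Map the request method and path to an event bus address, e.g. "POST/api"
    String address = request.get("httpMethod") + "" + request.get("path");

    CompletableFuture<Object> future = new CompletableFuture<>();
    vertx.eventBus().send(address, request.get("body"), reply -> {
      if (reply.succeeded()) {
        future.complete(reply.result().body());
      } else {
        future.completeExceptionally(reply.cause());
      }
    });

    Map<String, Object> response = new HashMap<>();
    response.put("statusCode", 200); // simplified: real code maps failures to error codes
    response.put("body", String.valueOf(future.join())); // block until the async work completes
    return response;
  }
}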

UrlService, Vertx Verticle.

The Verticle class is pretty straightforward: consumers that process the messages, methods for working with JDBC and helper methods for hashing/dehashing the id. The logic behind url shortening is fairly simple here. Each long url is stored in the database with a unique id and a few additional columns. The row id is hashed and returned as the short url. When retrieving the long url, the hash is decoded back to the row id and the long url is retrieved; the user is then redirected to the long url. With this implementation, a 6 char short url (characters after the domain) gives you 62^6 combinations, which is 56 800 235 584 rows for storing your urls. TinyURL is currently at 6 char urls (characters after the domain). You can of course implement methods for reusing and recycling ids.
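
A minimal sketch of that id hashing using plain base62 (the actual project may well use a library such as Hashids instead):

private static final String ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

// Encode a database row id into a short token, e.g. 125 -> "21"
static String encode(long id) {
  StringBuilder sb = new StringBuilder();
  do {
    sb.insert(0, ALPHABET.charAt((int) (id % 62)));
    id /= 62;
  } while (id > 0);
  return sb.toString();
}

// Decode the short token back into the row id
static long decode(String token) {
  long id = 0;
  for (char c : token.toCharArray()) {
    id = id * 62 + ALPHABET.indexOf(c);
  }
  return id;
}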

As said, this is all fairly straightforward if you are already familiar with Vertx. If you are wondering why I repeated the code for establishing a JDBC connection, here is the explanation (lines 10-16): I was getting timeouts when creating the JDBC connection in the Verticles. To avoid this I also added this code to my Handler class. This way the connection is created there, and because of the Vertx implementation any later attempt to create it again will just return the instances from the first invocation. This removed the need to pass the instances directly from the Handler class when creating Verticle instances.

Serverless configuration.

Lastly I would like to share the serverless.yml, the configuration file that allows seamless deployment and management of the AWS Lambda. With just a few commands and lines of configuration you are able to configure all necessary steps for deploying your AWS Lambda. The framework takes care of configuring API Gateway and other AWS hassle that would otherwise need to be done by hand. In this case the Lambda is invoked by HTTP events.

Performance and Tests

Vertx’s async capabilities eased the stress and memory needs of traditional AWS Lambdas with sync methods. After performing load tests, Lambdas that were written using the Vertx framework performed 10% faster and consumed 40% less memory. As I read somewhere in the Vertx documentation, sync methods will definitely finish the first request faster, but in total async will be faster in the end. These savings in memory and time will definitely reduce the cost of running your Lambdas, and the little overhead of additional code is for sure worth it.

Conclusion

To keep pace with the demanding needs for fast and resilient services we need to move away from traditional monoliths. Embracing the micro service architecture alone will not cut it, not anymore. With the rise and rapid advancement of cloud solutions it has never been so easy to build truly serverless systems upon a network of micro services. As you have seen, Vertx with its async API takes full advantage of AWS Lambdas, embracing them while also improving performance and lowering costs. With the help of the Serverless Framework, writing, deploying and managing your Lambdas has never been so easy.

If you are interested in the whole project, you can find it on GitHub.

Info
this is a re-publication of the following blog post

Eclipse Vert.x 3.5.1 released!


We have just released Vert.x 3.5.1!

Fixes first!

As usual this release fixes bugs reported in 3.5.0, see the release notes.

JUnit 5 support

This release introduces the new vertx-junit5 module.

JUnit 5 is a rewrite of the famous Java testing framework that brings new interesting features, including:

  • nested tests,
  • the ability to give a human-readable description of tests and test cases (and yes, even use emojis),
  • a modular extension mechanism that is more powerful than the JUnit 4 runner mechanism (@RunWith annotation),
  • conditional test execution,
  • parameterized tests, including from sources such as CSV data,
  • the support of Java 8 lambda expressions in the reworked built-in assertions API,
  • support for running tests previously written for JUnit 4.

Suppose that we have a SampleVerticle verticle that exposes a HTTP server on port 11981. Here is how we can test its deployment as well as the result of 10 concurrent HTTP requests:

@Test
@DisplayName("🚀 Deploy a HTTP service verticle and make 10 requests")
void useSampleVerticle(Vertx vertx, VertxTestContext testContext) {
  WebClient webClient = WebClient.create(vertx);
  Checkpoint deploymentCheckpoint = testContext.checkpoint();

  Checkpoint requestCheckpoint = testContext.checkpoint(10);
  vertx.deployVerticle(new SampleVerticle(), testContext.succeeding(id -> {
    deploymentCheckpoint.flag();

    for (int i = 0; i < 10; i++) {
      webClient.get(11981, "localhost", "/")
        .as(BodyCodec.string())
        .send(testContext.succeeding(resp -> {
          testContext.verify(() -> {
            assertThat(resp.statusCode()).isEqualTo(200);
            assertThat(resp.body()).contains("Yo!");
            requestCheckpoint.flag();
          });
        }));
    }
  }));
}

The test method above benefits from the injection of a working Vertx context, a VertxTestContext for dealing with asynchronous operations, and the guarantee that the execution time is bound by a timeout which can optionally be configured using a @Timeout annotation.

The test succeeds when all checkpoints have been flagged. Note that vertx-junit5 is agnostic of the assertions library being used: you may opt for the built-in JUnit 5 assertions or use a 3rd-party library such as AssertJ as we did in the example above.

You can check out the source on GitHub, read the manual and learn from the examples.

Web API Contract enhancements

The package vertx-web-api-contract includes a variety of fixes, from schema $ref resolution to revamped documentation. You can have a look at the list of all fixes/improvements here and all breaking changes here.

From 3.5.1, to load the OpenAPI spec and instantiate the Router you should use the new method OpenAPI3RouterFactory.create(), which replaces the old methods createRouterFactoryFromFile() and createRouterFactoryFromURL(). This new method accepts relative paths, absolute paths, local URLs with file:// and remote URLs with http://. Note that if you want to refer to a file relative to your jar’s root, you can simply use a relative path and the parser will look both outside and inside the jar for the spec.
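
A sketch of the new bootstrap, reusing the petstore spec from the earlier example:

OpenAPI3RouterFactory.create(vertx, "petstore.yaml", ar -> {
  if (ar.succeeded()) {
    OpenAPI3RouterFactory routerFactory = ar.result();
    // add handlers, options and security handlers, then call getRouter()
  } else {
    // the spec could not be loaded or is invalid
  }
});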

From 3.5.1, all settings for OpenAPI3RouterFactory behaviour during router generation are grouped in a new object called RouterFactoryOptions. With this object you can:

  • Configure if you want to mount a default validation failure handler and which one (methods setMountValidationFailureHandler(boolean) and setValidationFailureHandler(Handler))
  • Configure if you want to mount a default 501 not implemented handler and which one (methods setMountNotImplementedFailureHandler(boolean) and setNotImplementedFailureHandler(Handler))
  • Configure if you want to mount ResponseContentTypeHandler automatically (method setMountResponseContentTypeHandler(boolean))
  • Configure if you want to fail during router generation when security handlers are not configured (method setRequireSecurityHandlers(boolean))

After initializing the router factory, you can set the RouterFactoryOptions object with routerFactory.setOptions() at any time before calling getRouter().
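
For example, using only the option setters listed above (a sketch; check the defaults before enabling everything):

RouterFactoryOptions options = new RouterFactoryOptions()
  .setMountValidationFailureHandler(true) // mount the default validation failure handler
  .setMountNotImplementedFailureHandler(true) // answer 501 for operations without a handler
  .setMountResponseContentTypeHandler(true)
  .setRequireSecurityHandlers(true); // fail router generation if a security handler is missing

routerFactory.setOptions(options);
Router router = routerFactory.getRouter();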

RxJava deprecation removal

It is important to know that 3.5.x will be the last release with the legacy xyzObservable() methods:

@Deprecated()
public Observable listenObservable(int port, String host);

has been replaced since Vert.x 3.4 by:

public Single rxListen(int port, String host);

The xyzObservable() deprecated methods will be removed in Vert.x 3.6.

Wrap up

Vert.x 3.5.1 release notes and breaking changes:

The event bus client using the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding !

Google Summer of Code 2018


It’s that time of year again! The Google Summer of Code 2018 submission period has just started!

Submit through the Eclipse organization

This year, the Eclipse Vert.x project participates through the Eclipse organization. Make sure to review our GSoC 2018 ideas and to submit before March 27!

Assessment application

As we did before, we ask candidates to implement a simple Vert.x application. This helps us make sure candidates have a basic understanding of asynchronous programming and the Vert.x toolkit. But submit your proposal even if you are not done with the assessment application! Google will not extend the submission period, but we can continue reviewing assessments while evaluating the submitted proposals.

Questions?

If you have questions, feel free to ask possible mentors via email or on our community channels.

All the details for this year (and ideas from past years) can be found on the Vert.x GSoC page.

Looking forward to your proposals!

Vert.x 2.1.6 released !


The Vert.x team is pleased to announce the release of Vert.x 2.1.6.

This is a maintenance release on the 2.x branch that fixes a few bugs and is designed for Vert.x 2 production users who cannot upgrade to 3.0 immediately.

For the latest production version for new projects please see Vert.x 3.0.

Fixes in this release include:

  • runZip - fix bugs in unpacking zips
  • HttpClient - make sure writeHead is set to true before connect
  • Upgrade to Hazelcast 3.5 to fix bug in Multimap state.
  • Workaround for Hazelcast bug which could result in inconsistent cluster state if multiple nodes shutdown concurrently
  • Clustering fixes related to clearing up state in case of event bus connections closing and on close of event bus.
  • Fix message replies to nodes other than the node the SockJS bridge is deployed on.

The artifacts have been deployed to Maven Central, and you can get the distribution on Bintray.

Vert.x3 Web easy as Pi


Vert.x Web distinguishes itself from traditional application servers like JavaEE by just being a simple extension toolkit to Vert.x, which makes it quite lightweight and small but nevertheless very powerful.

One can create simple applications targeting small devices such as the Raspberry Pi without having to write much code, yet still very fast, as is expected from any Vert.x application.

Let’s for example think of making a realtime cpu load visualization web app. For this example we need a few things:

To bootstrap this project we start by creating the pom.xml file. A good start is always to consult the examples, and you should end up with something like:

...
<groupId>io.vertx.blog</groupId>
<artifactId>rpi</artifactId>
<version>1.0</version>

<dependencies>
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-core</artifactId>
    <version>3.0.0</version>
  </dependency>
  <dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-web</artifactId>
    <version>3.0.0</version>
  </dependency>
</dependencies>
...

At this moment you can start coding the application using the standard Maven source (src/main/java) and resource (src/main/resources) locations, and add the class io.vertx.blog.RPiVerticle to the project:

public class RPiVerticle extends AbstractVerticle {

  private static final OperatingSystemMXBean osMBean;

  static {
    try {
      osMBean = ManagementFactory.newPlatformMXBeanProxy(ManagementFactory.getPlatformMBeanServer(),
          ManagementFactory.OPERATING_SYSTEM_MXBEAN_NAME, OperatingSystemMXBean.class);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public void start() {

    Router router = Router.router(vertx);

    router.route("/eventbus/*").handler(SockJSHandler.create(vertx)
        .bridge(new BridgeOptions().addOutboundPermitted(new PermittedOptions().setAddress("load"))));

    router.route().handler(StaticHandler.create());

    vertx.createHttpServer().requestHandler(router::accept).listen(8080);

    vertx.setPeriodic(1000, t -> vertx.eventBus().publish("load",
        new JsonObject()
            .put("creatTime", System.currentTimeMillis())
            .put("cpuTime", osMBean.getSystemLoadAverage())));
  }
}

So let’s go through the code: first, in the static initializer we initialize the MXBean that will allow us to collect the current system load average; then, in the start method, we create a Vert.x Web Router and define that all requests starting with /eventbus should be handled by the SockJS handler, which we then bridge to the Vert.x EventBus, allowing outbound messages addressed to the load address.

Since our application is a web application, we will also serve some static content with the StaticHandler, and we finally start a HTTP server listening on port 8080.

So now all we are missing is a way to push real time data to the client, so we create a periodic task that repeats every 1000 milliseconds and sends some JSON payload to the address "load".

If you run this application right now you won’t see much since there is no frontend yet, so let’s build a very basic index.html:

...
var eb = new vertx.EventBus(window.location + "eventbus");

eb.onopen = function(){
  eb.registerHandler("load", function(msg){
    if (data.length === 25) {
      // when length of data equal 25 then pop data[0]
      data.shift();
    }
    data.push({
      "creatTime": newDate(msg.creatTime),
      "cpuTime": msg.cpuTime
    });
    render();
  });
};
...

Let’s walk through the code again: we start by opening an EventBus bridge over SockJS and register a handler on the load address to consume messages sent to that address. Once such a message arrives we do some housekeeping to avoid filling our browser memory, then add the incoming message to the data queue and trigger a rendering of the data. There is however one interesting issue here: since the message payload is JSON there is no native support for Date objects, so we need to do some parsing on what arrives from the server. In this case the server sends a simple time-since-epoch number, but one can choose any format they like.

At this moment you can build and package your app with mvn clean package, then deploy it to your Raspberry Pi, e.g.: scp target/rpi-1.0-fat.jar pi@raspberrypi:~/, and finally run it: java -jar rpi-1.0-fat.jar.

Open a browser to see the realtime graph!


Vert.x 3 init.d Script


Let’s say you have a Vert.x 3 application you want to install on a Linux server. But you want the old school way (I mean not the Docker way ☺). So, in other words, you need an init.d script. This post proposes an init.d script that you can use to start/stop/restart a Vert.x 3 application.

Prerequisites

The proposed script assumes your application is packaged as a fat jar. So, your application is going to be launched using java -jar your-fat-jar ....

The script

An init.d script has to respond to a set of commands:

  • start : starts the application (if not yet started)
  • stop : stops the application (if started)
  • status : let you know if the application is started or not
  • restart : restart the application

These commands are invoked using:

service my-service-script start
service my-service-script stop
service my-service-script status
service my-service-script restart

In general, service scripts are hooked in the boot and shutdown sequences to start and stop automatically during the system starts and stops.

So, enough talks, let’s look at the script:

Using the script

First, download the script from here.

You need to set a couple of variables located at the beginning of the file:

# The directory in which your application is installed
APPLICATION_DIR="/opt/my-vertx-app"
# The fat jar containing your application
APPLICATION_JAR="maven-verticle-3.0.0-fat.jar"
# The application arguments such as -cluster -cluster-host ...
APPLICATION_ARGS=""
# vert.x options and system properties (-Dfoo=bar).
VERTX_OPTS=""
# The path to the Java command to use to launch the application (must be Java 8+)
JAVA=/opt/java/java/bin/java

The rest of the script can stay as it is, but feel free to adapt it to your needs. Once you have set these variables based on your environment, move the file to /etc/init.d and set it as executable:

sudo mv my-vertx-application /etc/init.d
sudo chmod +x my-vertx-application

Then, you should be able to start your application using:

sudo service my-vertx-application start

Depending on your operating system, adding the hooks to the boot and shutdown sequences differs. For instance, on Ubuntu you need to use the update-rc.d command, while on CentOS chkconfig is used.

That’s all, enjoy !
