Oracle Blogs | Oracle Developers Blog


19th September 2018 |

Podcast: DevOps to NoOps: State of Play

What is the current state of play in DevOps? What forces are having the greatest impact on the evolution and adoption of DevOps? Is NoOps a valid prospect for the future? Those questions notwithstanding, one thing is certain: while everybody is talking about DevOps, getting from talk to action is proving to be a substantial hurdle for many organizations.

"What I see so far is lack of knowledge," says podcast panelist Davide Fiorentino. "People don't know the tools. Most of the time they don't know what they are talking about." In some cases the problem can be a lot like trying to turn a battleship.

As panelist Bert Jan Schrijver explains, "it's typically easier for smaller organizations to move to a definite way of working, and a bit harder for larger organizations," where the stakes can be high. "I typically try to find organization projects to work on where the IT department has no more than 50 to 60 people. Then there's a good opportunity to get the organization in the right mindset and to get everybody on deck."

But in Bert's experience, smaller doesn't always mean easier. "It can be easier to convince 1500 people who have the same mindset than 50 people who are basically against all that you're saying."

In that situation management support can be invaluable. "It's always been about having unconditional support in all levels of the organization, especially in management," Bert says. "Because when you're changing an organization you're always going to hit resistance. And if you're going to get resistance from somebody who's higher up in the tree than you, then you better have support from that person's manager."

"The key to working as a DevOps team is not being blocked by people or departments outside your team that you don't have influence on," Bert adds. "A true DevOps team is a cross-functional team which is a team that can do anything necessary to go from idea to working software in production."

"That's a very important point!" agrees Michael. "I really appreciate the ops guys having strong experiences and skills about non-functional parts of the solution, and running and scaling out infrastructure."

Of course, there is a lot more to getting from DevOps talk to real transformation, and what you're reading here is only a fraction of the insight Davide, Bert, and Michael offer in this podcast. So strap on your headphones and dig in.

BTW: Each of these panelists has sessions on the schedule for Oracle Code One, Oct 22-25, 2018 in San Francisco, CA. If you haven't already done so, there's plenty of time to register for that event. You'll find information on those sessions below.

Special thanks to my Developer Community colleague Javed Mohammed for his help in organizing this program, and for co-hosting the discussion.

The Panelists

Davide Fiorentino
Principal DevOps Engineer, Cambridge Broadband Networks Limited (CBNL)
Consultant, Food and Agriculture Organization, United Nations

Twitter LinkedIn

Code One Session:

  • DevOps in Action [BOF5289]
    Monday, Oct 22, 7:30 p.m. - 8:15 p.m. | Moscone West - Room 2009
Michael Hutterman
Java Champion
Oracle Developer Champion
Independent DevOps Consultant

Twitter LinkedIn

Code One Session:
  • Continuous Delivery/DevOps: Live Cooking Show [DEV4762]
    Monday, Oct 22, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2010
Bert Jan Schrijver
Java Champion
Oracle Developer Champion
CTO, OpenValue
Software Craftsman, JPoint

Twitter LinkedIn

Code One Sessions:
  • Better Software, Faster: Principles of Continuous Delivery and DevOps [DEV5118]
    Monday, Oct 22, 4:00 p.m. - 4:45 p.m. | Moscone West - Room 2010
  • Angular for Java Developers [DEV4345]
    Wednesday, Oct 24, 10:30 a.m. - 11:15 a.m. | Moscone West - Room 2003
  • Microservices in Action at the Dutch National Police [DEV4344]
    Monday, Oct 22, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2007
Javed Mohammed
Podcast Co-Host
Systems Community Manager, Oracle

Twitter LinkedIn 

Additional Resources

Coming Soon

Talking about microservices is a useful thing. But at some point the talk has to stop and the real work has to begin. And that's when the real challenges appear. In this upcoming podcast a panel of experts discusses how to overcome the challenges inherent in designing microservices that will fulfill their potential.


Never miss an episode! The Oracle Developer Community Podcast is available via:


14th September 2018 |

Connecting to Autonomous Transaction Processing Database from a Node. ...

In this tutorial I demonstrate how to connect an app written in Python, Node.js, or PHP running in Oracle Cloud Infrastructure (OCI) to an Autonomous Transaction Processing (ATP) Database running in Oracle Cloud. To complete these steps, it is assumed you have either a bare metal or VM shape running Oracle Linux with a public IP address in Oracle Cloud Infrastructure, and that you have access to the Autonomous Transaction Processing Database Cloud Service. I used Oracle Linux 7.5.

We've recently added Oracle Instant Client to the Oracle Linux yum mirrors in each OCI region, which has simplified the steps significantly. Previously, installing Oracle Instant Client required either registering a system with ULN or downloading from OTN, each with manual steps to accept license terms. Now you can simply use yum install directly from Oracle Linux running in OCI. For this example, I use a Node.js app, but the same principles apply to Python with cx_Oracle, PHP with php-oci8 or any other language that can connect to Oracle Database with an appropriate connector via Oracle Instant Client.

Overview

Installing Node.js, node-oracledb and Oracle Instant Client

Grab the Latest Oracle Linux Yum Mirror Repo File

These steps ensure you have an updated repo file, local to your OCI region, with a repo definition for OCI-included software such as Oracle Instant Client. Note that I obtain the OCI region from the instance metadata service, an HTTP endpoint that every OCI instance has access to. After connecting to your OCI compute instance via ssh, run the following commands:

cd /etc/yum.repos.d
sudo mv public-yum-ol7.repo public-yum-ol7.repo.bak
export REGION=`curl -s | jq -r '.region' | cut -d '-' -f 2`
sudo -E wget http://yum-$REGION-ol7.repo
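As an aside, here is what that region-parsing pipeline computes, sketched in Python; the region string format (e.g. "us-ashburn-1") is an assumption based on typical OCI region names:

```python
# Sketch of what `jq -r '.region' | cut -d '-' -f 2` extracts,
# assuming a region string such as "us-ashburn-1".

def short_region(region: str) -> str:
    """Return the second dash-separated field of the region name."""
    return region.split("-")[1]

print(short_region("us-ashburn-1"))  # -> ashburn
```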

Enable yum repositories for Node.js and Oracle Instant Client

Next, enable the required repositories to install Node.js 10 and Oracle Instant Client.

sudo yum install -y yum-utils
sudo yum-config-manager --enable ol7_developer_nodejs10 ol7_oci_included

Install Node.js, node-oracledb and Oracle Instant Client

To install Node.js 10 from the newly enabled repo, we'll need to make sure the EPEL repo is disabled. Otherwise, Node.js from that repo may be installed and that's not the Node we are looking for. Also, note the name of the node-oracledb package for Node.js 10 is node-oracledb-12c-node10. Oracle Instant Client will be installed automatically as a dependency of node-oracledb.

sudo yum --disablerepo="ol7_developer_EPEL" -y install nodejs node-oracledb-12c-node10

Add Oracle Instant Client to the runtime link path:

sudo sh -c "echo /usr/lib/oracle/12.2/client64/lib > /etc/"
sudo ldconfig

Using Oracle Instant Client

Download Wallet and Configure Wallet Location

To connect to ATP via SQL*Net, you'll need Oracle client credentials. An ATP service administrator can download these via the service console. See this documentation for more details.

Figure 1. Downloading Client Credentials (Wallet) from Autonomous Transaction Processing Service Console

Once you've obtained the wallet archive for your ATP Database, copy it to your OCI instance, unzip it and set the permissions appropriately. First prepare a location to store the wallet.

sudo mkdir -pv /etc/ORACLE/WALLETS/ATP1
sudo chown -R opc /etc/ORACLE

Copy the wallet from the machine you downloaded it to over to the OCI instance. Here I'm copying the file from my development machine using scp. Note that I'm using the ssh key file that matches the ssh key I created the instance with.

Note: this next command is run on your development machine to copy the downloaded wallet zip file to your OCI instance. In my case, it was downloaded to ~/Downloads on my MacBook.

scp -i ~/.ssh/oci/oci ~/Downloads/ opc@<OCI INSTANCE PUBLIC IP>:/etc/ORACLE/WALLETS/ATP1

Returning to the OCI instance, unzip the wallet and set the permissions appropriately.

cd /etc/ORACLE/WALLETS/ATP1
unzip
sudo chmod -R 700 /etc/ORACLE

Edit sqlnet.ora to point to the wallet location, replacing ?/network/admin. After editing, sqlnet.ora should look something like this:
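As a minimal sketch (assuming the wallet directory used above; the exact entries in your downloaded sqlnet.ora may differ slightly):

```
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/etc/ORACLE/WALLETS/ATP1")))
SSL_SERVER_DN_MATCH = yes
```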


Set the TNS_ADMIN environment variable to point Instant Client at the Oracle configuration directory, as well as NODE_PATH so that the node-oracledb module can be found by our Node.js program.

export TNS_ADMIN=/etc/ORACLE/WALLETS/ATP1
export NODE_PATH=`npm root -g`

Create and run a Node.js Program to Test Connection to ATP

Create a file, select.js, based on the example below. Either assign values to the environment variables NODE_ORACLEDB_USER, NODE_ORACLEDB_PASSWORD, and NODE_ORACLEDB_CONNECTIONSTRING to suit your configuration, or edit the placeholder values USERNAME, PASSWORD, and CONNECTIONSTRING in the code below. The first two are the username and password you've been given for ATP; the connect string is one of the service descriptors in the $TNS_ADMIN/tnsnames.ora file.

'use strict';

const oracledb = require('oracledb');

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: process.env.NODE_ORACLEDB_USER || "USERNAME",
      password: process.env.NODE_ORACLEDB_PASSWORD || "PASSWORD",
      connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING || "CONNECTIONSTRING"
    });
    let result = await connection.execute("select sysdate from dual");
    console.log(result.rows[0]);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();
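Since the same principles apply to Python with cx_Oracle, here is a sketch of the equivalent configuration pattern, environment variables with placeholder fallbacks; the cx_Oracle call itself is left as a comment so the snippet stands alone, and is an assumption rather than part of the original tutorial:

```python
import os

# Resolve connection settings the same way the Node.js example does:
# use the environment variable when set, else a placeholder to edit.
user = os.environ.get("NODE_ORACLEDB_USER", "USERNAME")
password = os.environ.get("NODE_ORACLEDB_PASSWORD", "PASSWORD")
dsn = os.environ.get("NODE_ORACLEDB_CONNECTIONSTRING", "CONNECTIONSTRING")

# With cx_Oracle installed and TNS_ADMIN pointing at the wallet,
# connecting would then look like:
#   connection = cx_Oracle.connect(user, password, dsn)
#   cursor = connection.cursor()
#   cursor.execute("select sysdate from dual")
```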

Run It!

Let's run our Node.js program. You should see a date returned from the Database.

node select.js
[ 2018-09-13T18:19:54.000Z ]

Important Notes

As there currently isn't a service gateway to connect from Oracle Cloud Infrastructure to Autonomous Transaction Processing, any traffic between these two will count against your network quota.


In this blog post I've demonstrated how to run a Node.js app on an Oracle Linux instance in Oracle Cloud Infrastructure (OCI) and connect it to Autonomous Transaction Processing Database by installing all necessary software —including Oracle Instant Client— directly from yum servers within OCI itself. By offering direct access to essential Oracle software from within Oracle Cloud Infrastructure, without requiring manual steps to accept license terms, we've made it easier for developers to build Oracle-based applications on Oracle Cloud.



6th September 2018 |

Autonomous Database: Creating an Autonomous Transaction Processing Instance

In this post I’m going to demonstrate how quickly and easily one can create an Autonomous Transaction Processing (ATP for short) instance of Oracle’s Autonomous Database Cloud Services. Oracle’s ATP launched on the 7th of August 2018 and is the general-purpose flavor of the Oracle Autonomous Database. My colleague SQLMaria (also known as Maria Colgan 😉) has already done a great job explaining the difference between the Autonomous Transaction Processing and Autonomous Data Warehouse services. She has also written another post on what one can expect from Oracle Autonomous Transaction Processing. I highly recommend reading both her articles first for a better understanding of the offerings.

Last but not least, you can try ATP yourself today via the Oracle Free Cloud Trial.

Now let’s get started. Provisioning an ATP service is, as said above, quick and easy.


To create an instance you just have to follow these three simple steps:

  1. Log into the Oracle Cloud Console and choose "Autonomous Transaction Processing" from the menu.
  2. Click "Create Autonomous Transaction Processing"
  3. Specify the name, the amount of CPU and storage, the administrator password and hit "Create Autonomous Transaction Processing"

Creating an ATP instance

In order to create an ATP environment you first have to log on to the Oracle Cloud Console. From there, click on the top left menu and choose “Autonomous Transaction Processing“.


On the next screen you will see all your ATP databases, in my case none, because I haven’t created any yet. Hit the “Create Autonomous Transaction Processing” button.


A new window will open that asks you about the display and database name, the amount of CPUs and storage capacity, as well as the administrator password and the license to use.


The display name is what you will see in the cloud console once your database service is created. The database name is the name of the database itself that you will later connect to from your applications. You can use the same name for both or different ones. In my case I will use a different name for the database than for the service.

The minimum CPU and storage count is 1, which is what I’m going for. Don’t forget that scaling the CPUs and/or storage up and down is fully online with Oracle Autonomous Database and transparent to the application. So even if you don’t yet know exactly how many CPUs or TBs of storage you need, you can always change that later on, with no outages!

Next you have to specify the password for the admin user.


The admin user is a database user with administrative privileges that allows you to create other users and perform various other tasks.

Last but not least, you have to choose which license model you want to use.


The choice is either to bring your own license, i.e. “My organization already owns Oracle Database software licenses“, sometimes also referred to as “BYOL” or “Bring Your Own License“. This means you already have unused Oracle Database licenses that you would like to reuse for your Autonomous Transaction Processing instance. This is usually done when you want to migrate your on-premises databases into the cloud and leverage the Oracle Database licenses you have already bought.

The other option is to subscribe to new Oracle Database software licenses as part of the provisioning. This option is usually used if you want to have a new database cloud service that doesn’t replace an existing database.

Once you have made your choice, it’s time to hit the “Create Autonomous Transaction Processing“ button.

Your database is now being provisioned.


Once the state changes to Green – Available, your database is up and running.


Clicking on the name of the service will provide you with further details.


Congratulations, you have just created your first Autonomous Transaction Processing Database Cloud Service. Make sure you also check out the Autonomous Transaction Processing Documentation.

Originally published at on August 28, 2018.


15th August 2018 |

Introducing GraphPipe
Dead Simple Machine Learning Model Serving

There has been rapid progress in machine learning over the past few years. Today, you can grab one of a handful of frameworks, follow some online tutorials, and have a working machine learning model in a matter of hours. Unfortunately, when you are ready to deploy that model into production you still face several unique challenges.

First, there is no standard for model serving APIs, so you are likely stuck with whatever your framework gives you. This might be protocol buffers or custom JSON. Your business application will generally need a bespoke client just to talk to your deployed model. And it's even worse if you are using multiple frameworks. If you want to create ensembles of models from multiple frameworks, you'll have to write custom code to combine them.

Second, building your model server can be incredibly complicated. Deployment gets much less attention than training, so out-of-the-box solutions are few and far between. Try building a GPU version of TensorFlow-serving, for example. You better be prepared to bang your head against it for a few days.

Finally, many of the existing solutions don't focus on performance, so for certain use cases they fall short. Serving a bunch of tensor data from a complex model via a python-JSON API is not going to cut it for performance-critical applications.

We created GraphPipe to solve these three challenges. It provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers that make deploying and querying machine learning models from any framework a breeze. GraphPipe's efficient servers can serve models built in TensorFlow, PyTorch, mxnet, CNTK, or caffe2. We are pleased to announce that GraphPipe is available on Oracle's GitHub, where documentation, examples, and other relevant content can be found as well.

The Business Case

In the enterprise, machine-learning models are often trained individually and deployed using bespoke techniques. This impacts an organization's ability to derive value from its machine learning efforts. If marketing wants to use a model produced by the finance group, they will have to write custom clients to interact with the model. If the model becomes popular and sales wants to use it as well, the custom deployment may crack under the load.

It only gets worse when the models start appearing in customer-facing mobile and IoT applications. Many devices are not powerful enough to run models locally and must make a request to a remote service. This service must be efficient and stable while running models from varied machine learning frameworks.

A standard allows researchers to build the best possible models, using whatever tools they desire, and be sure that users can access their models' predictions without bespoke code. Models can be deployed across multiple servers and easily aggregated into larger ensembles using a common protocol. GraphPipe provides the tools that the business needs to derive value from its machine learning investments.

Implementation Details

GraphPipe is an efficient network protocol designed to simplify and standardize transmission of machine learning data between remote processes. Presently, no dominant standard exists for how tensor-like data should be transmitted between components in a deep learning architecture. As such it is common for developers to use protocols like JSON, which is extremely inefficient, or TensorFlow-serving's protocol buffers, which carries with it the baggage of TensorFlow, a large and complex piece of software. GraphPipe is designed to bring the efficiency of a binary, memory-mapped format while remaining simple and light on dependencies.

GraphPipe includes:

  • A set of flatbuffer definitions
  • Guidelines for serving models consistently according to the flatbuffer definitions
  • Examples for serving models from TensorFlow, ONNX, and caffe2
  • Client libraries for querying models served via GraphPipe

In essence, a GraphPipe request behaves like a TensorFlow-serving predict request, but using flatbuffers as the message format. Flatbuffers are similar to google protocol buffers, with the added benefit of avoiding a memory copy during the deserialization step. The flatbuffer definitions provide a request message that includes input tensors, input names and output names. A GraphPipe remote model accepts the request message and returns one tensor per requested output name. The remote model also must provide metadata about the types and shapes of the inputs and outputs that it supports.
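As a purely hypothetical illustration (the real messages are flatbuffers, and these class and field names are invented, not GraphPipe's actual schema), the request/response shape described above looks roughly like:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InferRequest:
    input_tensors: List[list]   # one tensor per input
    input_names: List[str]
    output_names: List[str]

@dataclass
class InferResponse:
    output_tensors: List[list]  # one tensor per requested output name

def serve(request: InferRequest) -> InferResponse:
    # A trivial identity "model": echo the first input once per
    # requested output, to show the one-tensor-per-output contract.
    return InferResponse(
        output_tensors=[request.input_tensors[0]
                        for _ in request.output_names])
```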


First, we compare serialization and deserialization speed of float tensor data in Python using a custom ujson API, protocol buffers using a TensorFlow-serving predict request, and a GraphPipe remote request. The request consists of about 19 million floating-point values (128 224x224x3 images) and the response is approximately 3.2 million floating-point values (128 7x7x512 convolutional outputs). The units on the left are in seconds.
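The quoted float counts follow directly from the tensor shapes given, as a quick sanity check:

```python
# 128 images of 224x224x3 in the request,
# 128 convolutional outputs of 7x7x512 in the response.
request_floats = 128 * 224 * 224 * 3
response_floats = 128 * 7 * 7 * 512

print(request_floats)   # 19267584, i.e. about 19 million
print(response_floats)  # 3211264, i.e. about 3.2 million
```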

GraphPipe is especially performant on the deserialize side, because flatbuffers provide access to underlying data without a memory copy.

Second, we compare end-to-end throughput using a Python-JSON TensorFlow model server, TensorFlow-serving, and the GraphPipe-go TensorFlow model server. In each case the backend model is the same. Large requests are made to the server using 1 thread and then again with 5 threads. The units on the left are rows calculated by the model per second.

Note that this test uses the recommended parameters for building TensorFlow-serving. Although the recommended build parameters do not perform well, we were ultimately able to discover compilation parameters that allow it to perform on par with our GraphPipe implementation. In other words, an optimized TensorFlow-serving performs similarly to GraphPipe, although building TensorFlow-serving to perform optimally is neither documented nor easy.

Where Do I Get it?

You can find plenty of documentation and examples on Oracle's GitHub, where the GraphPipe flatbuffer spec lives along with servers that implement the spec for Python and Go. We also provide clients for Python, Go, and Java (coming soon), as well as a plugin for TensorFlow that allows the inclusion of a remote model inside a local TensorFlow graph.


15th August 2018 |

Podcast: Developer Evolution: What's rockin’ roles in IT?

The good news is that the US Bureau of Labor Statistics predicts 24% growth in software developer jobs through 2026. That’s well above average. The outlook for database administrators certainly isn’t bleak, but with projected job growth of 11% through 2026, that’s less than half the growth projected for developers. Job growth for system administrators, at 6% through 2026, is considered average by the BLS. So while the news is positive all around, developers clearly have an advantage. Each of these roles has separate and distinct responsibilities. But why is the outlook so much better for developers, and what does this say about what’s happening in the IT ecosystem?

"More than ever," says Oracle Developer Champion Rolando Carrasco, "institutions, organizations, and governments are keen to generate a new crop of developers that can help them to create something new." In today's business climate competition is tough, and a high premium is placed on innovation. "But developers have a lot of tools, a lot of abilities within reach, and the opportunity to make something that can make a competitive difference."

But the role of the developer is morphing into something new, according to Oracle ACE Director Martin Giffy D'Souza. "In the next couple years we're also going to see that the typical developer is not going to be the traditional developer that went to school, or the script kiddies that just got into the business. We're going to see what is called the citizen developer. We're going to see a lot more people transition to that simply because it adds value to their job. Those people are starting to hit the limits of writing VBA macros in Excel and they want to write custom apps. I think that's what we're going to see more and more of, because we already know there's a developer job shortage."

But why is the job growth for developers outpacing that for DBAs and SysAdmins? "If you take it at very high level, devs produce things," Martin says. "They produce value. They produce products.  DBAs and IT people are maintainers. They’re both important, but the more products and solutions we can create," the more value to the business.

Oracle ACE Director Mark Rittman has spent the last couple of years working as a product manager in a start-up, building a tech platform. "I never saw a DBA there," he admits. "It was at the point that if I were to try to explain what a DBA was to people there, all of whom are uniformly half my age, they wouldn't know what I was talking about. That's because the platforms people use these days, within the Oracle ecosystem or Google or Amazon or whatever, it's all very much cloud, and it's all very much NoOps, and it's very much the things that we used to spend ages worrying about."

This frees developers to do what they do best. "There are far fewer people doing DBA work and SysAdmin work," Mark says. "That’s all now in the cloud. And that also means that developers can also develop now. I remember, as a BI developer working on projects, it was surprising how much of my time was spent just getting the system working in the first place, installing things, configuring things, and so on. Probably 75% of every project was just getting the thing to actually work."

Where some roles may vanish altogether, others will transform. DBAs have become data engineers or infrastructure engineers, according to Mark. "So there are engineers around and there are developers around," he observes, "but I think administrator is a role that, unless you work for one of the big cloud companies in one of those big data centers, is largely kind of managed away now."

Phil Wilkins, an Oracle ACE, has witnessed the changes. DBAs in particular, as well as network people focused on infrastructure, have been dramatically affected by cloud computing, and the ground is still shaking. "With the rise and growth in cloud adoption these days, you're going to see the low level, hard core technical skills that the DBAs used to bring being concentrated into the cloud providers, where you're taking a database as a service. They're optimizing the underlying infrastructure, making sure the database is running. But I'm just chucking data at it, so I don't care about whether the storage is running efficiently or not. The other thing is that although developers now get more freedom, and we've got NoSQL and things like that, we're getting more and more computing power, and it's accelerating at such a rate now that, where 10 years ago we used to have to really worry about the tuning and making sure the database was performant, we can now do a lot of that computing on an iPhone. So why are we worrying when we've got huge amounts of cloud and CPU to the bucketload?"

These comments represent just a fraction of the conversation captured in this latest Oracle Developer Community Podcast, in which the panelists dive deep into the forces that are shaping and re-shaping roles, and discuss their own concerns about the trends and technologies that are driving that evolution. Listen!

The Panelists

Rolando Carrasco
Oracle Developer Champion
Oracle ACE
Co-owner, Principal SOA Architect, S&P Solutions
Twitter LinkedIn

Martin Giffy D'Souza

Martin Giffy D'Souza
Oracle ACE Director
Director of Innovation, Insum Solutions
Twitter LinkedIn 

Mark Rittman

Mark Rittman
Oracle ACE Director
Chief Executive Officer, MJR Analytics
Twitter LinkedIn 

Phil Wilkins

Phil Wilkins
Oracle ACE
Senior Consultant, Capgemini
Twitter LinkedIn

Related Oracle Code One Sessions

The Future of Serverless is Now: Ci/CD for the Oracle Fn Project, by Rolando Carrasco and Leonardo Gonzalez Cruz [DEV5325]

Other Related Content

Podcast: Are Microservices and APIs Becoming SOA 2.0?

Vibrant and Growing: The Current State of API Management

Video: 2 Minute Integration with Oracle Integration Cloud Service

It's Always Time to Change

Coming Soon

The next program, coming on Sept 5, will feature a discussion of "DevOps to NoOps," featuring panelists Baruch Sadogursky, Davide Fiorentino, Bert Jan Schrijver, and others TBA. Stay tuned!


Never miss an episode! The Oracle Developer Community Podcast is available via:


6th August 2018 |

What's New in Oracle Developer Cloud Service - August 2018

Over the weekend we updated Oracle Developer Cloud Service - your cloud-based DevOps and Agile platform - with a new release (18.3.3) adding some key new features that will improve the way you develop and release software on the Oracle Cloud. Here is a quick rundown of the key new capabilities added this month.


Environments

A new top-level section in Developer Cloud Service now allows you to define "Environments" - a collection of cloud services that you bundle together under one name. Once you have an environment defined, you'll be able to see its status on the home page of your project. You can, for example, define development, test, and production environments - and see the status of each one with a simple glance.

Environment View

This is the first step in a set of future features of DevCS that will help you manage software artifacts across environments in an easier way.

Project Templates

When you create a new project in DevCS you can base it on a template. Until this release you were limited to templates created by Oracle; now you can define your own templates for your company.

Templates can include default artifacts such as wiki pages, default Git repositories, and even build and deployment steps.

This is very helpful for companies aiming to standardize development across development teams, as well as for teams that have repeating patterns of development.

Project Template

Wiki Enhancements

The wiki in DevCS is a very valuable mechanism for your team to share information, and we just added a bunch of enhancements that will make collaboration in your team even better.

You can now watch specific wiki pages or sections, which will notify you whenever someone updates those pages.

We also added support for commenting on wiki pages - helping you conduct virtual discussions on their content.

Wiki tracking


These are just some of the new features in Developer Cloud Service. All of these features are part of the free functionality that Developer Cloud Service provides to Oracle Cloud customers. Take them for a spin and let us know what you think.

For information on additional new features, check out the What's New in Developer Cloud Service documentation.

Got technical questions? Ask them on our Cloud Customer Connect community page.



6th August 2018 |

Auto-updatable, self-contained CLI with Java 11

(Originally published on Medium)


Over the course of the last 11 months, we have seen two major releases of Java — Java 9 and Java 10. Come September, we will get yet another release in the form of Java 11, all thanks to the new 6 month release train. Each new release introduces exciting features to assist the modern Java developer. Let’s take some of these features for a spin and build an auto-updatable, self-contained command line interface.

The minimum viable feature-set for our CLI is defined as follows:

  • Display the current bitcoin price index by calling the free CoinDesk API
  • Check for new updates and if available, auto update the CLI
  • Ship the CLI with a custom Java runtime image to make it self-contained

To follow along, you will need a copy of a JDK 11 early-access build. You will also need the latest version (4.9 at the time of writing) of Gradle. Of course, you can use your preferred way of building Java applications. Though not required, familiarity with JPMS and JLink can be helpful, since we are going to use the module system to build a custom runtime image.

Off we go

We begin by creating a class that provides the latest bitcoin price index. Internally, it reads a configuration file to get the URL of the CoinDesk REST API and builds an HTTP client to retrieve the latest price. This class makes use of the new fluent HTTP client classes that are part of the HTTP client module shipped in Java 11.

var bpiRequest = HttpRequest.newBuilder()
    .uri(new URI(config.getProperty("bpiURL")))
    .GET()
    .build();

var bpiApiClient = HttpClient.newHttpClient();

bpiApiClient
    .sendAsync(bpiRequest, HttpResponse.BodyHandlers.ofString())
    .thenApply(response -> toJson(response))
    .thenApply(bpiJson -> bpiJson.getJsonObject("usd").getString("rate"));

By Java standards, this code is remarkably concise. We used the new fluent builders to create a GET request, call the API, convert the response into JSON, and pull the current bitcoin price in USD.

In order to build a modular jar and set ourselves up to use jlink, we need to add a “” file to specify the CLI’s dependencies on other modules.

module ud.bpi.cli {
    requires java.net.http;   // Java 11 HTTP client
    requires java.json;       // external JSON library
}

From the code snippet, we observe that our CLI module requires the http module shipped in Java 11 and an external JSON library.

Now, let’s turn our attention to implementing an auto-updater class. This class should provide two methods: one to talk to a central repository and check for the availability of a newer version of the CLI, and another to download the latest version. The following snippet shows how easy it is to use the new HTTP client interfaces to download remote files.

CompletableFuture<Boolean> update(String downloadToFile) {
    try {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(new URI("http://localhost:8080/"))
                .GET()
                .build();
        return HttpClient.newHttpClient()
                .sendAsync(request, HttpResponse.BodyHandlers
                        .ofFile(Paths.get(downloadToFile)))
                .thenApply(response -> {
                    unzip(response.body());
                    return true;
                });
    } catch (URISyntaxException ex) {
        return CompletableFuture.failedFuture(ex);
    }
}

The new predefined HTTP body handlers in Java 11 can convert a response body into common high-level Java objects. We used the HttpResponse.BodyHandlers.ofFile() method to download a zip file that contains the latest version of our CLI.
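The post shows update() but not its companion check(); at its core, check() simply compares the locally installed version against the latest version reported by the central repository. A minimal sketch of that comparison, written in Python for brevity (the dotted-version scheme is an assumption, not part of the original code):

```python
# Hypothetical version comparison for an auto-updater's check() step:
# compare dotted version strings numerically, so "1.10" is newer than "1.2".
def is_newer(latest, current):
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(latest) > parse(current)
```

A real check() would first fetch the latest version string from the repository (for instance with the same HTTP client) before comparing.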

Let’s put these classes together by using a launcher class. It provides an entry point to our CLI and implements the application flow. Right when the application starts, this class calls its launch() method that will check for new updates.

void launch() {
    var autoUpdater = new AutoUpdater();
    try {
        if (autoUpdater.check().get()) {
            System.exit(autoUpdater.update().get() ? 100 : -1);
        }
    } catch (InterruptedException | ExecutionException ex) {
        throw new RuntimeException(ex);
    }
}

As you can see, if a new version of the CLI is available, we download the new version and exit the JVM by passing in a custom exit code 100. A simple wrapper script will check for this exit code and rerun the CLI.

#!/bin/sh
...
start
EXIT_STATUS=$?
if [ ${EXIT_STATUS} -eq 100 ]; then
    start
fi

And finally, we will use “jlink” to create a runtime image that includes all the necessary pieces to execute our CLI. jlink is a new command line tool provided by Java that will look at the options passed to it to assemble and optimize a set of modules and their dependencies into a custom runtime image. In the process, it builds a custom JRE — thereby making our CLI self-contained.

jlink --module-path build/libs/:${JAVA_HOME}/jmods \
      --add-modules ud.bpi.cli \
      --launcher bpi=ud.bpi.cli/ud.bpi.cli.Launcher \
      --output images

Let’s look at the options that we passed to jlink:

  • “--module-path” tells jlink to look in the specified folders for Java modules
  • “--add-modules” tells jlink which user-defined modules to include in the custom image
  • “--launcher” specifies the name of the script that will start our CLI and the full path to the class containing the application's main method
  • “--output” specifies the folder that holds the newly created self-contained custom image

When we run our first version of the CLI and there are no updates available, the CLI prints something like this:

Say we release a new version (2) of the CLI and push it to the central repo. Now, when you rerun the CLI, you will see something like this:

Voila! The application sees that a new version is available and auto-updates itself. It then restarts the CLI. As you can see, the new version adds an up/down arrow indicator to let the user know how well the bitcoin price index is doing.

Head over to GitHub to grab the source code and experiment with it.


19th July 2018 |

Oracle Load Balancer Classic configuration with Terraform

(Originally published on Medium)

This article provides an introduction to using the Load Balancer resources to provision and configure an Oracle Cloud Infrastructure Load Balancer Classic instance with Terraform.

When using the Load Balancer Classic resources with the opc Terraform provider, the lbaas_endpoint attribute must be set in the provider configuration.

provider "opc" {
  version         = "~> 1.2"
  user            = "${var.user}"
  password        = "${var.password}"
  identity_domain = "${var.compute_service_id}"
  endpoint        = "${var.compute_endpoint}"
  lbaas_endpoint  = ""
}

First we create the main Load Balancer instance resource. The Server Pool, Listener, and Policy resources will be created as child resources associated with this instance.

resource "opc_lbaas_load_balancer" "lb1" {
  name              = "examplelb1"
  region            = "uscom-central-1"
  description       = "My Example Load Balancer"
  scheme            = "INTERNET_FACING"
  permitted_methods = ["GET", "HEAD", "POST"]
  ip_network        = "/Compute-${var.domain}/${var.user}/ipnet1"
}

To define the set of servers the load balancer will direct traffic to, we create a Server Pool, sometimes referred to as an origin server pool. Each server is defined by the combination of a target IP address, or hostname, and port. For brevity, we’ll assume we already have a couple of instances on an existing IP Network with a web service running on port 8080.

resource "opc_lbaas_server_pool" "serverpool1" {
  load_balancer = "${}"
  name          = "serverpool1"
  servers       = ["", ""]
  vnic_set      = "/Compute-${var.domain}/${var.user}/vnicset1"
}

The Listener resource defines which incoming traffic the Load Balancer will direct to a specific server pool. Multiple Server Pools and Listeners can be defined for a single Load Balancer instance. For now we’ll assume all traffic is HTTP, both to the load balancer and between the load balancer and the server pool (we’ll look at securing traffic with HTTPS later). In this example the load balancer manages inbound requests for a site and directs them to the server pool we defined above.

resource "opc_lbaas_listener" "listener1" {
  load_balancer     = "${}"
  name              = "http-listener"
  balancer_protocol = "HTTP"
  port              = 80
  virtual_hosts     = [""]
  server_protocol   = "HTTP"
  server_pool       = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}",
  ]
}

Policies are used to define how the Listener processes the incoming traffic. In the Listener definition we reference a Load Balancing Mechanism Policy to set how the load balancer allocates traffic across the available servers in the server pool. Additional policy types could also be defined, for example to control session affinity.

resource "opc_lbaas_policy" "load_balancing_mechanism_policy" {
  load_balancer = "${}"
  name          = "roundrobin"

  load_balancing_mechanism_policy {
    load_balancing_mechanism = "round_robin"
  }
}
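As an aside, the round_robin mechanism selected here simply hands successive requests to the pool members in a fixed rotation. A quick Python sketch of the idea (the server names are illustrative, not part of the configuration):

```python
# Round-robin distribution: each new request goes to the next server in the
# pool, wrapping around to the first after the last.
import itertools

pool = ["server-a:8080", "server-b:8080"]  # illustrative stand-ins for pool members
rr = itertools.cycle(pool)

first_four = [next(rr) for _ in range(4)]  # alternates between the two servers
```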

With that, our first basic Load Balancer configuration is complete. Well, almost. The last step is to configure a DNS CNAME record to point the source domain name (e.g. ) to the canonical host name of the load balancer instance. The exact steps will depend on your DNS provider. To get the canonical_host_name, add the following output.

output "canonical_host_name" {
  value = "${opc_lbaas_load_balancer.lb1.canonical_host_name}"
}

Helpful hint: if you are just creating the load balancer for testing and don’t have access to a DNS name you can redirect, a workaround is to set the virtual host in the listener configuration to the load balancer’s canonical host name. You can then use the canonical host name directly for the inbound service URL.

resource "opc_lbaas_listener" "listener1" {
  ...
  virtual_hosts = ["${opc_lbaas_load_balancer.lb1.canonical_host_name}"]
  ...
}

Configuring the Load Balancer for HTTPS

There are two separate aspects to configuring the Load Balancer for HTTPS traffic. The first is enabling inbound HTTPS requests to the Load Balancer, often referred to as SSL or TLS termination or offloading. The second is using HTTPS for traffic between the Load Balancer and the servers in the origin server pool.

HTTPS SSL/TLS Termination

To configure the Load Balancer listener to accept inbound HTTPS requests for encrypted traffic between the client and the Load Balancer, create a Server Certificate providing the PEM encoded certificate and private key, and the concatenated set of PEM encoded certificates for the CA certification chain.

resource "opc_lbaas_certificate" "cert1" {
  name              = "server-cert"
  type              = "SERVER"
  private_key       = "${var.private_key_pem}"
  certificate_body  = "${var.cert_pem}"
  certificate_chain = "${var.ca_cert_pem}"
}

Now update the existing listener, or create a new one, for HTTPS:

resource "opc_lbaas_listener" "listener2" {
  load_balancer     = "${}"
  name              = "https-listener"
  balancer_protocol = "HTTPS"
  port              = 443
  certificates      = ["${opc_lbaas_certificate.cert1.uri}"]
  virtual_hosts     = [""]
  server_protocol   = "HTTP"
  server_pool       = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}",
  ]
}

Note that the server pool protocol is still HTTP; in this configuration, traffic is only encrypted between the client and the load balancer.

HTTP to HTTPS redirect

A common pattern for many web applications is to ensure that any initial incoming requests over HTTP are redirected to HTTPS for secure site communication. To do this, we can update the original HTTP listener we created above with a new redirect policy.

resource "opc_lbaas_policy" "redirect_policy" {
  load_balancer = "${}"
  name          = "example_redirect_policy"

  redirect_policy {
    redirect_uri  = "https://${var.dns_name}"
    response_code = 301
  }
}

resource "opc_lbaas_listener" "listener1" {
  load_balancer     = "${}"
  name              = "http-listener"
  balancer_protocol = "HTTP"
  port              = 80
  virtual_hosts     = [""]
  server_protocol   = "HTTP"
  server_pool       = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.redirect_policy.uri}",
  ]
}

HTTPS between Load Balancer and Server Pool

HTTPS between the Load Balancer and Server Pool should be used if the server pool is accessed over the Public Internet, and can also be used for extra security when accessing servers within the Oracle Cloud Infrastructure over the private IP Network.

This configuration assumes the backend servers are already configured to serve their content over HTTPS.

To configure the Load Balancer to communicate securely with the backend servers, create a Trusted Certificate, providing the PEM encoded certificate and the CA authority certificate chain for the backend servers.

resource "opc_lbaas_certificate" "cert2" {
  name              = "trusted-cert"
  type              = "TRUSTED"
  certificate_body  = "${var.cert_pem}"
  certificate_chain = "${var.ca_cert_pem}"
}

Next create a Trusted Certificate Policy referencing the Trusted Certificate

resource "opc_lbaas_policy" "trusted_certificate_policy" {
  load_balancer = "${}"
  name          = "example_trusted_certificate_policy"

  trusted_certificate_policy {
    trusted_certificate = "${opc_lbaas_certificate.cert2.uri}"
  }
}

And finally, update the listener’s server pool configuration to HTTPS, adding the trusted certificate policy.

resource "opc_lbaas_listener" "listener2" {
  load_balancer     = "${}"
  name              = "https-listener"
  balancer_protocol = "HTTPS"
  port              = 443
  certificates      = ["${opc_lbaas_certificate.cert1.uri}"]
  virtual_hosts     = [""]
  server_protocol   = "HTTPS"
  server_pool       = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}",
    "${opc_lbaas_policy.trusted_certificate_policy.uri}",
  ]
}

More Information


18th July 2018 |

A Quick Look At What's New In Oracle JET v5.1.0

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided here in the FAQ.

As indicated by the release number, v5.1.0 is a minor release, aimed at tweaking and consolidating features throughout the toolkit. As in other recent releases, new features have been added to support development of composite components, following the Composite Component Architecture (CCA). For details, see the entry on the new Template Slots in Duncan Mills's blog. Also, take note of the new design time metadata, as described in the release notes.

Aside from the work done in the CCA area, the key new features and enhancements to be aware of in the release are listed below, sorted alphabetically:

  • oj-chart: New "data" attribute. Introduces new attributes, slots, and custom elements.
  • oj-film-strip: New "looping" attribute. Specifies filmstrip navigation behavior, bounded ("off") or looping ("page").
  • oj-form-layout: Enhanced content flexibility. Removes restrictions on the types of children allowed in the "oj-form-layout" component.
  • oj-gantt: New "dnd" attribute and "ojMove" event. Provides new support for moving tasks via drag and drop.
  • oj-label-value: New component. Provides enhanced layout flexibility for the "oj-form-layout" component.
  • oj-list-view: Enhanced "itemTemplate" slot. Supports including the <LI> element in the template.
  • oj-swipe-actions: New component. Provides a declarative way to add swipe-to-reveal functionality to items in the "oj-list-view" component.

For all the details on the items above, see the release notes.

Note: Be aware that in Oracle JET 7.0.0, support for Yeoman and Grunt will be removed from generator-oraclejet and ojet-cli. As a consequence, the ojet-cli will be the only way to use the Oracle JET tooling, e.g., to create new Oracle JET projects, from that point on. Therefore, if you haven't yet moved from Yeoman and Grunt to ojet-cli command line calls such as "ojet create", take some time to do so before the 7.0.0 release.

As always, your comments and constructive feedback are welcome. If you have questions, or comments, please engage with the Oracle JET Community in the Discussion Forums and also follow @OracleJET on Twitter.

For organizations using Oracle JET in production, you're invited to be highlighted on the Oracle JET site, with the latest addition being a brand new Customer Success Story by Capgemini.

On behalf of the entire Oracle JET development team: "Happy coding!"


18th July 2018 |

Vibrant and Growing: The Current State of API Management

"Vibrant and growing all the time!" That's how Andrew Bell, Oracle PaaS API Management Architect at Capgemini, describes the current state of API management. "APIs are the doors to organizations, the means by which organizations connect to one another, connect their processes to one another, and streamline those processes to meet customer needs. The API environment is growing rapidly as we speak," Bell says.

"API management today is quite crucial," says Bell's Capgemini colleague Sander Rensen, an Oracle PaaS lead and architect, "especially for clients who want to go on a journey of digital transformation. For our clients, the ability to quickly find APIs and subscribe to them is a very crucial part of digital transformation."

"It's not just the public-facing view of APIs," observes Oracle ACE Phil Wilkins, a senior Capgemini consultant specializing in iPaaS. "People are realizing that APIs are an easier, simpler way to do internal decoupling. If I expose my back-end system in a particular way to another part of the organization — the same organization — I can then mask from you how I'm doing transformation or innovation or just trying to keep alive a legacy system while we try and improve our situation," Wilkins explains. "I think that was one of the original aspirations of WSDL and technologies like that, but we ended up getting too fine-grained and tying WSDLs to end products. Then the moment the product changed that WSDL changed and you broke the downstream connections."

Luis Weir, CTO of Capgemini's Oracle delivery unit and an Oracle Developer Champion and ACE Director, is just as enthusiastic about the state of API management, but sees a somewhat rocky road ahead for some organizations. "APIs are one thing, but the management of those APIs is something entirely different," Weir explains.

"API management is something that we're doing quite heavily, but I don't think all organizations have actually realized the importance of the full lifecycle management of the APIs. Sometimes people think of API management as just an API gateway. That’s an important capability, but there is far more to it."

Weir wonders if organizations understand what it means to manage an API throughout its entire lifecycle.

Bell, Rensen, Wilkins, and Weir are the authors of Implementing Oracle API Platform Cloud Service, now available from Packt Publishing, and as you'll hear in this podcast, they bring considerable insight and expertise to this discussion of what's happening in API management. The conversation goes beyond the current state of API management to delve into architectural implications, API design, and how working in SOA may have left you with some bad habits. Listen!

This program was recorded on June 27, 2018.

The Panelists

Andrew Bell
Oracle PaaS API Management Architect, Capgemini
Twitter  LinkedIn

Sander Rensen
Oracle PaaS Lead and Architect, Capgemini
Twitter  LinkedIn

Luis Weir
CTO, Oracle DU, Capgemini
Oracle Developer Champion
Oracle ACE Director
Twitter  LinkedIn

Phil Wilkins
Senior Consultant specializing in iPaaS, Capgemini
Oracle ACE
Twitter  LinkedIn

Additional Resources

Related Oracle Code One Sessions: Coming Soon

How has your role as a developer, DBA, or Sysadmin changed? Our next program will focus on the evolution of IT roles and the trends and technologies that are driving the changes.


Never miss an episode! The Oracle Developer Community Podcast is available via:


12th July 2018 |

Keep Calm and Code On: Four Ways an Enterprise Blockchain Platform Can Improve Developer ...

A guest post by Sarabjeet (Jay) Chugh, Sr. Director Product Marketing, Oracle Cloud Platform


You just got a cool new Blockchain project for a client. As you head back to the office, you start to map out the project plan in your mind. Can you meet all of your client’s requirements in time? You're not alone in this dilemma.

You attend a blockchain conference the next day, get inspired by engaging talks, and meet fellow developers working on similar projects. A lunchtime chat with a new friend turns into a lengthy conversation about getting started with Blockchain.

Now you’re bursting with new ideas and ready to get started with your hot new Blockchain coding project. Right?

Well almost…

You go back to your desk and contemplate a plan of action to develop your smart contract or distributed application, thinking through the steps, including ideation, analysis, prototype, coding, and finally building the client-facing application.


It is then that reality sets in. You begin thinking beyond the proof of concept to the production phase, which will require additional things that you will need to design for and build into your solution. Additional things such as:

These things may delay or even prevent you from getting started with building the solution. Ask yourself questions such as:

  • Should I spend time trying to fulfill dependencies of open-source software such as Hyperledger Fabric on my own to start using it to code something meaningful?
  • Do I spend time building integrations of diverse systems of record with Blockchain?
  • Do I figure out how to assemble components such as identity management, compute infrastructure, storage, and management & monitoring systems with Blockchain?
  • How do I integrate my familiar development tools & CI/CD platform without learning new tools?
  • And finally, is it the best use of your time to figure out scaling, security, disaster recovery, point-in-time recovery of the distributed ledger, and the “ilities” like reliability, availability, and scalability?

If the answer to one or more of these is a resounding no, you are not alone. Focusing on the above aspects, though important, will take time away from doing the actual work to meet your client’s needs in a timely manner, which can definitely be a source of frustration.

But do not despair.

Read on to learn how an enterprise Blockchain platform such as the one from Oracle can make your life simpler. Imagine productivity savings multiplied hundreds of thousands of times across critical enterprise blockchain applications and chaincode.

What is an Enterprise Blockchain Platform?

The very term “enterprise” typically signals a “large-company, expensive thing” in the hearts and minds of developers. Not so in this case, as it may be more cost effective than spending your expensive developer hours to build, manage, and maintain blockchain infrastructure and its dependencies on your own.

As the chart below shows, the top two Blockchain technologies used in proofs of concept have been Ethereum and Hyperledger.


Ethereum has been a platform of choice amid the ICO hype for public blockchain use. However, it has relatively lower performance and is slower and less mature compared to Hyperledger. It also uses a less secure programming model based on a primitive language called Solidity, which is prone to re-entrancy attacks, such as the DAO attack that recently lost $50M.

Hyperledger Fabric, on the other hand, wins out in terms of maturity, stability, and performance, and is a good choice for enterprise use cases involving permissioned blockchains. In addition, capabilities such as the ones listed in red have been added by vendors such as Oracle that make it simpler to adopt and use, while retaining open source compatibility.

Let’s look at how an enterprise Blockchain platform, such as the one Oracle has built on open-source Hyperledger Fabric, can help boost developer productivity.

How an Enterprise Blockchain Platform Drives Developer Productivity

Enterprise blockchain platforms provide four key benefits that drive greater developer productivity:

Performance at Scale

  • Faster consensus with Hyperledger Fabric
  • Faster world state DB - record level locking for concurrency and parallelization of updates to world state DB
  • Parallel execution across channels, smart contracts
  • Parallelized validation for commit

Operations Console with Web UI

  • Dynamic Configuration – Nodes, Channels
  • Chaincode Lifecycle – Install, Instantiate, Invoke, Upgrade
  • Adding Organizations
  • Monitoring dashboards
  • Ledger browser
  • Log access for troubleshooting

Resilience and Availability

  • Highly Available configuration with replicated VMs
  • Autonomous Monitoring & Recovery
  • Embedded backup of configuration changes and new blocks
  • Zero-downtime patching

Enterprise Development and Integration

  • Offline development support and tooling
  • DevOps CI/CD integration for chaincode deployment, and lifecycle management
  • SQL rich queries, which enable writing fewer lines of code, fewer lines to debug
  • REST API based integration with SaaS, custom apps, systems of record
  • Node.js, GO, Java client SDKs
  • Plug-and-Play integration adapters in Oracle’s Integration Cloud

Developers can experience orders-of-magnitude productivity gains with a pre-assembled, managed, enterprise-grade, and integrated blockchain platform, compared to assembling it on their own.


Oracle offers a pre-assembled, open, enterprise-grade blockchain platform, which provides plug-and-play integrations with systems of records and applications and autonomous AI-driven self-driving, self-repairing, and self-securing capabilities to streamline operations and blockchain functionality. The platform is built with Oracle’s years of experience serving enterprise’s most stringent use cases and is backed by expertise of partners trained in Oracle blockchain. The platform rids developers of the hassles of assembling, integrating, or even worrying about performance, resilience, and manageability that greatly improves productivity.

If you’d like to learn more, register to attend an upcoming webcast (July 16, 9 am PST/12 pm EST). And if you’re ready to dive right in, you can sign up for $300 of free credits good for up to 3500 hours of Oracle Autonomous Blockchain Cloud Service usage.


5th July 2018 |

Build and Deploy Node.js Microservice on Docker using Oracle Developer Cloud

This is the first blog in a series that will help you understand how you can build a Docker image for a Node.js REST microservice and push it to Docker Hub using Oracle Developer Cloud Service. The next blog in the series will focus on deploying the container we build here to Oracle Kubernetes Engine on Oracle Cloud Infrastructure.

You can read about the overview of the Docker functionality in this blog.

Technology Stack Used

Developer Cloud Service - DevOps Platform

Node.js Version 6 – For microservice development.

Docker – For Build

Docker Hub – Container repository


Setting up the Environment:

Setting up Docker Hub Account:

You should create an account on Docker Hub. Keep the credentials handy for use in the build configuration section of the blog.

Setting up Developer Cloud Git Repository:

Now log in to your Oracle Developer Cloud Service project and create a Git repository as shown below. You can give the Git repository a name of your choice; for the purpose of this blog, I am calling it NodeJSDocker. Copy the Git repository URL and keep it handy for future use.

Setting up Build VM in Developer Cloud:

Now we have to create a VM Template and VM with the Docker software bundle for the execution of the build.

Click on the user drop down on the right hand top of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VM(s) you want to create, and select the VM Template you just created (“DockerTemplate” for this blog).


Pushing Scripts to Git Repository on Oracle Developer Cloud:

Command_prompt:> cd <path to the NodeJS folder>

Command_prompt:> git init

Command_prompt:> git add --all

Command_prompt:> git commit -m "<some commit message>"

Command_prompt:> git remote add origin <Developer Cloud Git repository HTTPS URL>

Command_prompt:> git push origin master

Below screen shots are for your reference.


Below is the folder structure description for the code that I have in the Git Repository on Oracle Developer Cloud Service.

Code in the Git Repository:

You will need to push the following three files to the Developer Cloud hosted Git repository we have created.


This is the main Node JavaScript code snippet (Main.js). It contains two simple routes: the first displays a message, and the second, /add, adds two numbers. The application listens on port 80.

var express = require("express");
var bodyParser = require("body-parser");

var app = express();
app.use(bodyParser.urlencoded());
app.use(bodyParser.json());

var router = express.Router();

router.get('/', function(req, res) {
  res.json({"error": false, "message": "Hello Abhinav!"});
});'/add', function(req, res) {
  res.json({"error": false, "message": "success", "data": req.body.num1 + req.body.num2});

app.use('/', router);

app.listen(80, function() {
  console.log("Listening at PORT 80");
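One subtlety worth noting in the /add route: it returns req.body.num1 + req.body.num2 directly, so it only sums values when the client POSTs a JSON body with numeric num1 and num2; with form-encoded (string) values, + concatenates instead. A Python sketch of the same behavior:

```python
# Mirrors the /add route's semantics: '+' adds numbers but concatenates
# strings, so the client should send JSON numbers, not form-encoded strings.
def add_route(body):
    return {"error": False, "message": "success", "data": body["num1"] + body["num2"]}
```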


In this JSON code snippet we define the Node.js module dependencies. We also define the start file, which is Main.js for our project, and the name of the application.

{
  "name": "NodeJSMicro",
  "version": "0.0.1",
  "scripts": {
    "start": "node Main.js"
  },
  "dependencies": {
    "body-parser": "^1.13.2",
    "express": "^4.13.1"
  }
}


This file contains the commands to be executed to build the Docker container with the Node.js code. It starts from the Node.js version 6 Docker image, adds the two files Main.js and package.json cloned from the Git repository, and runs npm install to download the dependencies declared in package.json. It exposes port 80 for the Docker container, and finally starts the application, which listens on port 80.


FROM node:6
ADD Main.js ./
ADD package.json ./
RUN npm install
EXPOSE 80
CMD [ "npm", "start" ]

Build Configuration:

Click on the “+ New Job” button and, in the dialog which pops up, give the build job a name of your choice (for the purpose of this blog I have named it “NodeJSMicroDockerBuild”), then from the dropdown select the build template (DockerTemplate) that we created earlier in the blog.

As part of the build configuration, add Git from the “Add Source Control” dropdown. And now select the repository we created earlier in the blog, which is NodeJSDocker and the master branch to which we have pushed the code. You may select the checkbox to configure automatic build trigger on SCM commits.

Now from the Builders tab, select Docker Builder -> Docker Login. In the Docker login form you can leave the Registry host empty as we will be using Docker Hub which is the default Docker registry for Developer Cloud Docker Builder. You will have to provide the Docker Hub account username and password in the respective fields of the login form.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You can leave the Registry host empty as we are going to use Docker Hub which is the default registry. Now, you just need to give the Image name in the form that gets added and you are all done with the Build Job configuration. Click on Save to save the build job configuration.

Note: Image name should be in the format <Docker Hub user name>/<Image Name>

For this blog we can give the image name as - nodejsmicro

Then add Docker Push by selecting Docker Builder -> Docker Push from the Builders tab. Here you just need to mention the Image name, the same as in the Docker Build form, to push the Docker image to the Docker registry, which in this case is Docker Hub.

Once you execute the build, you will be able to see the build in the build queue.

Once the build gets executed, the Docker image that gets built is pushed to the Docker registry, which is Docker Hub for our blog. You can log in to your Docker Hub account to see the Docker repository being created and the image being pushed to it, as seen in the screen shot below.

Now you can pull this image anywhere, then create and run a container from it, and you will have your Node.js microservice code up and running.


You can go ahead and try many other Docker commands both using the out of the box Docker Builder functionality and also alternatively using the Shell Builder to run your Docker commands.

In the next blog, of the series, we will deploy this Node.js microservice container on a Kubernetes cluster in Oracle Kubernetes Engine.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle




3rd July 2018 |

Lessons From Alpha Zero (part 5): Performance Optimization

Photo by Mathew Schwartz on Unsplash

(Originally published on Medium)

This is the fifth installment in our series on lessons learned from implementing AlphaZero. Check out Part 1, Part 2, Part 3, and Part 4.

In this post, we review aspects of our AlphaZero implementation that allowed us to dramatically improve the speed of game generation and training.


The task of implementing AlphaZero is daunting, not just because the algorithm itself is intricate, but also due to the massive resources the authors employed to do their research: 5000 TPUs were used over the course of many hours to train their algorithm, and that is presumably after a tremendous amount of time was spent determining the best parameters to allow it to train that quickly.

By choosing Connect Four as our first game, we hoped to make a solid implementation of AlphaZero while utilizing more modest resources. But soon after starting, we realized that even a simple game like Connect Four could require significant resources to train: in our initial implementation, training would have taken weeks on a single GPU-enabled computer.

Fortunately, we were able to make a number of improvements that made our training cycle time shrink from weeks to about a day. In this post I’ll go over some of our most impactful changes.

  The Bottleneck

Before diving into some of the tweaks we made to reduce AZ training time, let’s describe our training cycle. Although the authors of AlphaZero used a continuous and asynchronous process to perform model training and updates, for our experiments we used the following three-stage synchronous process, which we chose for its simplicity and debuggability:

While (my model is not good enough):

  1. Generate Games: every model cycle, using the most recent model, game play agents generate 7168 games, which equates to about 140–220K game positions.
  2. Train a New Model: based on a windowing algorithm, we sample from historical data and train an improved neural network.
  3. Deploy the New Model: we now take our new model, transform it into a deployable format, and push it into our cloud for the next cycle of training.
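The loop above can be sketched in code as follows. All three stages are stubbed out, and the names and window size are ours, not from the actual implementation:

```python
def generate_games(model, n_games):
    # Stage 1: self-play using the most recent model (stubbed: fixed-length games)
    return [["position"] * 25 for _ in range(n_games)]

def train_new_model(model, window):
    # Stage 2: sample from a window of historical data and train an
    # improved network (stubbed: just bump the version number)
    return {"version": model["version"] + 1}

def deploy(model):
    # Stage 3: transform into a deployable format and push to the cloud (stubbed)
    return model

def training_cycle(is_good_enough, n_games=8):
    model = deploy({"version": 0})
    history = []
    while not is_good_enough(model):
        games = generate_games(model, n_games)
        history.extend(games)
        # simple sliding window over the most recent generations
        model = train_new_model(model, history[-4 * n_games:])
        model = deploy(model)
    return model, history

final_model, history = training_cycle(lambda m: m["version"] >= 3)
```

In the real pipeline, `is_good_enough` would be an evaluation step and `generate_games` would produce the 7168 games per cycle mentioned above.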

Far and away, the biggest bottleneck of this process is game generation, which was taking more than an hour per cycle when we first got started. Because of this, minimizing game generation time became the focus of our attention.

  Model Size

Alpha Zero is very inference-heavy during self-play. In fact, during one of our typical game generation cycles, MCTS requires over 120 million position evaluations. Depending on the size of your model, this can translate to significant GPU time.

In the original implementation of AlphaZero, the authors used an architecture where the bulk of computation was performed in 20 residual layers each with 256 filters. This amounts to a model in excess of 90 megabytes, which seemed overkill for Connect Four. Also, using a model of that size was impractical given our initially limited GPU resources.
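As a rough sanity check on that 90-megabyte figure, here is a back-of-the-envelope weight count for a residual tower, assuming two 3x3 convolutions per residual block and ignoring biases, batch-norm parameters, and the head networks. This is our own estimate, not the authors' exact architecture:

```python
def residual_tower_bytes(blocks, filters, bytes_per_weight=4, kernel=3):
    """Approximate size in bytes of a tower of residual blocks,
    each containing two kernel x kernel convolutions."""
    weights_per_conv = kernel * kernel * filters * filters
    total_weights = blocks * 2 * weights_per_conv
    return total_weights * bytes_per_weight

# 20 blocks x 256 filters at 4 bytes per weight: roughly 94 MB,
# consistent with "in excess of 90 megabytes"
print(residual_tower_bytes(20, 256) / 1e6)

# the small 5 x 64 starting model is only about 1.5 MB
print(residual_tower_bytes(5, 64) / 1e6)
```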

Instead, we started with a very small model, using just 5 layers and 64 filters, just to see if we could make our implementation learn anything at all. As we continued to optimize our pipeline and improve our results, we were able to bump our model size to 20 layers with 128 filters while still maintaining a reasonable game generation speed on our hardware.

  Distributed Inference

From the get-go, we knew that we would need more than one GPU in order to achieve the training cycle time that we were seeking, so we created software that allowed our Connect 4 game agent to perform remote inference to evaluate positions. This allowed us to scale GPU-heavy inference resources separately from game play resources, which need only CPU.

  Parallel Game Generation

GPU resources are expensive, so we wanted to make sure that we were saturating them as much as possible during playouts. This turned out to be trickier than we imagined.

One of the first optimizations we put in place was to run many games on parallel threads from the same process. Perhaps the largest direct benefit of this is that it allowed us to cache position evaluations, which could be shared amongst different threads. This cut the number of requests getting sent to our remote inference server by more than a factor of two.
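A minimal sketch of such a shared cache might look like the following. The class and function names are ours, and `evaluate_fn` stands in for whatever actually calls the remote inference service:

```python
import threading

class EvalCache:
    """Thread-safe cache of position evaluations shared across game threads."""

    def __init__(self, evaluate_fn):
        self._evaluate = evaluate_fn
        self._cache = {}
        self._lock = threading.Lock()
        self.misses = 0

    def evaluate(self, position):
        with self._lock:
            if position in self._cache:
                return self._cache[position]
        # remote inference happens outside the lock so other threads
        # can keep reading the cache while we wait
        value = self._evaluate(position)
        with self._lock:
            if position not in self._cache:
                self._cache[position] = value
                self.misses += 1
        return value
```

Positions must be hashable (e.g. a board encoded as a tuple or string) for this to work as a dictionary key.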

Caching was a huge win, but we still wanted to deal with the remaining uncached requests in an efficient manner. To minimize network latency and best leverage GPU parallelization, we combined inference requests from different worker threads into a bucket before sending them to our inference service. The downside to this is that if a bucket was not promptly filled, any calling thread would be stuck waiting until the bucket’s timeout expired. Under this scheme, choosing an appropriate inference bucket size and timeout value was very important.
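The bucketing scheme described above can be sketched as follows. This is a simplified, single-bucket version with names of our own invention; a production version would run the flush on a dedicated timer thread rather than piggybacking on a caller's timeout:

```python
import threading

class InferenceBucket:
    """Collects inference requests from worker threads and sends them
    to the inference service as one batch, flushing when the bucket is
    full or when a caller's timeout expires."""

    def __init__(self, batch_fn, size, timeout=0.05):
        self._batch_fn = batch_fn   # callable: list of positions -> list of results
        self._size = size
        self._timeout = timeout
        self._pending = []          # list of (position, done-event, result-slot)
        self._lock = threading.Lock()

    def infer(self, position):
        done = threading.Event()
        slot = [None]
        with self._lock:
            self._pending.append((position, done, slot))
            if len(self._pending) >= self._size:
                self._flush_locked()
        # wait for a full bucket; give up after the timeout and flush ourselves
        if not done.wait(self._timeout):
            with self._lock:
                self._flush_locked()
            done.wait()
        return slot[0]

    def _flush_locked(self):
        batch, self._pending = self._pending, []
        if not batch:
            return
        results = self._batch_fn([p for p, _, _ in batch])
        for (_, event, slot), result in zip(batch, results):
            slot[0] = result
            event.set()
```

Choosing `size` and `timeout` embodies exactly the trade-off described above: bigger buckets use the GPU better, but a slow-to-fill bucket stalls every caller until the timeout fires.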

We found that bucket fill rate varied throughout the course of a game generation batch, mostly because some games would finish sooner than others, leaving behind fewer and fewer threads to fill the bucket. This caused the final games of a batch to take a long time to complete, all while GPU utilization dwindled to zero. We needed a better way to keep our buckets filled.

  Parallel MCTS

To help with our unfilled bucket problem, we implemented Parallel MCTS, which was discussed in the AZ paper. Initially we had punted on this detail, as it seemed mostly important for competitive one-on-one game play, where parallel game play is not applicable. After running into the issues mentioned previously, we decided to give it a try.

The idea behind Parallel MCTS is to allow multiple threads to take on the work of accumulating tree statistics. While this sounds simple, the naive approach suffers from a basic problem: if N threads all start at the same time and choose a path based on the current tree statistics, they will all choose exactly the same path, thus crippling MCTS’ exploration component.

To counteract this, AlphaZero uses the concept of Virtual Loss, an algorithm that temporarily adds a game loss to any node that is traversed during a simulation. A lock is used to prevent multiple threads from simultaneously modifying a node’s simulation and virtual loss statistics. After a node is visited and a virtual loss is applied, when the next thread visits the same node, it will be discouraged from following the same path. Once a thread reaches a terminal point and backs up its result, this virtual loss is removed, restoring the true statistics from the simulation.
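The mechanics can be sketched like this. It is a simplified illustration, not our actual implementation: values are on a 0-to-1 scale so a virtual loss contributes value 0, the `c_puct` constant is arbitrary, and only the per-node counters are locked:

```python
import math
import threading

class Node:
    """MCTS node statistics with virtual loss, guarded by a lock."""

    def __init__(self, prior):
        self.prior = prior
        self.visits = 0
        self.value_sum = 0.0
        self.virtual_losses = 0
        self.lock = threading.Lock()

    def q(self):
        # each virtual loss counts as an extra visit that scored a loss
        # (value 0 on a 0..1 scale), dragging Q down temporarily
        n = self.visits + self.virtual_losses
        return self.value_sum / n if n else 0.0

def select(children, c_puct=1.5):
    """Pick a child by PUCT and apply a virtual loss to it."""
    total = sum(ch.visits + ch.virtual_losses for ch in children) + 1
    def ucb(ch):
        n = ch.visits + ch.virtual_losses
        return ch.q() + c_puct * ch.prior * math.sqrt(total) / (1 + n)
    best = max(children, key=ucb)
    with best.lock:
        best.virtual_losses += 1   # discourage other threads from this path
    return best

def backup(node, value):
    """Remove the temporary loss and record the real simulation result."""
    with node.lock:
        node.virtual_losses -= 1
        node.visits += 1
        node.value_sum += value
```

With two equally attractive children, a second thread selecting right after the first will be steered to the other child, which is the whole point of the technique.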

With virtual loss in place, we were finally able to achieve >95% GPU utilization during most of our game generation cycle, which was a sign that we were approaching the real limits of our hardware setup.

Technically, virtual loss adds some degree of exploration to game playouts, as it forces move selection down paths that MCTS may not naturally be inclined to visit, but we never measured any detrimental (or beneficial) effect due to its use.


  TensorRT

Though it was not necessary to use a model quite as large as the one described in the AlphaZero paper, we saw better learning from larger models, and so wanted to use the biggest one possible. To help with this, we tried TensorRT, a technology created by Nvidia to optimize the performance of model inference.

It is easy to convert an existing TensorFlow/Keras model to TensorRT using just a few scripts. Unfortunately, at the time we were working on this, there was no released TensorRT remote serving component, so we wrote our own.

With TensorRT’s default configuration, we noticed a small increase in inference throughput (~11%). We were pleased by this modest improvement, but hoped to see an even larger performance increase from TensorRT’s INT8 mode. INT8 mode required a bit more effort to get going: when using INT8, you must first generate a calibration file to tell the inference engine what scale factors to apply to your layer activations when using 8-bit approximated math. This calibration is done by feeding a sample of your data into Nvidia’s calibration library.

Because we observed some variation in the quality of calibration runs, we would attempt calibration against three different sets of sample data, and then validate the resulting configuration against held-out data. Of the three calibration attempts, we chose the one with the lowest validation error.
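The selection step is simple enough to show generically. Here `calibrate` and `validate` are placeholders for the real calibration run and the held-out evaluation; we are deliberately not reproducing the TensorRT API itself:

```python
def pick_best_calibration(calibrate, validate, sample_sets):
    """Run calibration against several sample sets and keep the
    configuration with the lowest error on held-out data."""
    results = []
    for samples in sample_sets:
        config = calibrate(samples)          # produces a calibration config
        results.append((validate(config), config))
    error, best = min(results, key=lambda r: r[0])
    return best, error
```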

Once our INT8 implementation was in place, we saw an almost 4X increase in inference throughput vs. stock libtensorflow, which allowed us to use larger models than would have otherwise been feasible.

One downside of using INT8 is that it can be lossy and imprecise in certain situations. While we didn’t observe serious precision issues during the early parts of training, as learning progressed we would observe the quality of inference start to degrade, particularly on our value output. This initially led us to use INT8 only during the very early stages of training.

Serendipitously, we were able to virtually eliminate our INT8 precision problem when we began experimenting with increasing the number of convolutional filters in our head networks, an idea we got from Leela Chess. Below is a chart of our value output’s mean absolute error with 32 filters in the value head, vs. the AZ default of 1:

We theorize that adding additional cardinality to these layers reduces the variance in the activations, which makes the model easier to accurately quantize. These days, we always perform our game generation with INT8 enabled and see no ill effects, even toward the end of AZ training.


By using all of these approaches, we were finally able to train a decent-sized model with high GPU utilization and good cycle time. It initially looked like a full training run would take weeks, but now we could train a decent model in less than a day. This was great, but it turned out we were just getting started: in the next article we’ll talk about how we tuned AlphaZero itself to get even better learning speed.

Part 6 is now out.

Thanks to Vish (Ishaya) Abrams and Aditya Prasad.


22nd June 2018 |

Arrgs. My Bot Doesn't Understand Me! Why Intent Resolutions Sometimes Appear to Be Misbehaving

Article by Grant Ronald, June 2018

One of the most common questions asked when someone starts building a real bot is “Why am I getting strange intent resolutions?” For example, someone tests the bot with random key presses like “slkejfhlskjefhksljefh” and finds an 80% resolution for “CheckMyBalance”. The first reaction is to blame the intent resolution within the product. However, the reality is that you’ve not trained it to know any better. This short article gives a high-level conceptual explanation of how models do and don’t work.


Related Content

TechExchange - First Step in Training Your Bot


22nd June 2018 |

A Practical Guide to Building Multi-Language Chatbots with the Oracle Bot Platform

Article by Frank Nimphius, Marcelo Jabali - June 2018

Chatbot support for multiple languages is a worldwide requirement. Almost every country has a need to support foreign languages, be it for immigrants, refugees, tourists, or even employees crossing borders daily for their jobs.

According to the Linguistic Society of America, as of 2009, 6,909 distinct languages had been classified, a number that has since grown. Although no bot needs to support all languages, it is clear that for developers building multi-language bots, understanding natural language in multiple languages is a challenge, especially if the developer does not speak all of the languages he or she needs to support.

This article explores Oracle's approach to multi-language support in chatbots. It explains the tooling and practices to use and follow to build bots that understand and "speak" foreign languages.

Read the full article.


Related Content

TechExchange: A Simple Guide and Solution to Using Resource Bundles in Custom Components 

TechExchange - Custom Component Development in OMCe – Getting Up and Running Immediately

TechExchange - First Step in Training Your Bot