Oracle Blogs | Oracle Developers Blog


17th January 2018 |

Podcast: Jfokus Panel: Building a New World Out of Bits

Our first program for 2018 brings together a panel of experts whose specialties cover a broad spectrum, including Big Data, security, open source, agile, domain driven design, Pattern-Oriented Software Architecture, Internet of Things, and more. The thread that connects these five people is that they are part of the small army of experts that will be presenting at the 2018 Jfokus Developers Conference, February 5-7, 2018 in Stockholm, Sweden.

This program was recorded on January 10, 2018

The Panelists

(in alphabetical order)


Jesse Anderson (@jessetanderson)
Data Engineer, Creative Engineer, Managing Director, Big Data Institute
Reno, Nevada

Suggested Resources


Benjamin Cabé (@kartben)
IoT Program Manager, Evangelist, Eclipse Foundation
Toulouse, France

Suggested Resources

  • Article: Monetizing IoT Data using IOTA
  • White Paper: The Three Software Stacks Required for IoT Architectures
    A collaboration of the Eclipse IoT Working Group

Kevlin Henney (@KevlinHenney)
Consultant, programmer, speaker, trainer, writer, owner, Curbralan
Bristol, UK

Suggested Resources


Siren Hofvander (@SecurityPony)
Chief Security Officer with Min Doktor
Malmö, Sweden

Suggested Resources


Dan Bergh Johnsson (@danbjson)
Agile aficionado, Domain Driven Design enthusiast, code quality craftsman, Omegapoint, Stockholm, Sweden

Suggested Resources

Additional Resources Coming Soon
  • Women in Technology
    With Heli Helskyaho, Michelle Malcher, Kellyn Pot'Vin-Gorman, and Laura Ramsey
  • DevOps: Can This Marriage be Saved
    With Nicole Forsgren, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, and Kelly Shortridge
  • Combating Complexity
    With Adam Bien, Lucas Jellema, Chris Newcombe, and Chris Richardson

Never miss an episode! The Oracle Developer Community Podcast is available via:


16th January 2018 |

The Best Way to Get Help with Your Oracle Database Questions

One of the best things about the Oracle Developer Community is the easy access to expert help and ideas. To add to that expert content, Oracle is launching a new service for developers called Ask TOM Office Hours. Chris Saxon, Oracle SQL Developer Advocate and SQL expert, tells all about it:

Aaaaargh! Any more of this and I was ready to throw my computer out of the window. I was stuck. I was editing a video for The Magic of SQL, trying to create some blended split-screen effects. I was sure it was possible. I just didn’t know how. Searches turned up nothing. So I turned to forums for help.

But, instead of answers, all I was getting was requests for extra details. Three days in and I was still no closer to achieving the desired effect. So I gave up and called a colleague. After a couple of minutes chatting, they were able to point me to a solution.

Progress at last!

It’s a drawback that plagues technical forums. A simple request for help can turn into a prolonged back-and-forth exchange of information.

“Which version are you using?”

“What does your code look like?”

“Have you set the im_not_an_idiot parameter?”

They do want to help. But the problem is that it's tough to provide effective help without a full understanding of your issue. Respondents need to know what you’re trying to do, what you’ve tried and what you’re working with. So you settle in for a game of internet pong. Your question pings back and forth between you and your unknown “helper”. Until finally your query is answered. Or one of you gives up. All the while sucking up your valuable time.

Frustrating, isn’t it?

Wouldn’t it be great if, in addition to support and Q&A forums, you could have an actual, live conversation, working out all the details of your malady?

Where you could quickly get to the root of the issue or learn how to properly apply a new feature to your program?

Now you can!

Introducing Ask TOM Office Hours

These are scheduled, live Q&A sessions. Hosted by Oracle Database Product Managers, evangelists and even developers. The Oracle product experts. Ready to help you get the best out of Oracle technology.

And the best part: Ask TOM Office Hours sessions are 100% free!

Office Hours continues the pioneering tradition of Ask TOM. Launched in 2000 by Tom Kyte, the site now has a dedicated team who answer hundreds of questions each month. Together they’ve helped millions of developers understand and use Oracle Database.

Office Hours takes this service to the next level, giving you live, direct access to a horde of experts within Oracle. All dedicated to helping you get the most out of your Oracle investment. To take advantage of this new program, visit the Office Hours home page and find an expert who can help. Sign up for the session and, at the appointed hour, join the webinar. There you can put your questions to the host or listen to the Q&A of others, picking up tips and learning about new features.

Each session will have a specific focus, based on the presenter’s expertise. But you are welcome to ask other questions as well.

Stuck on a thorny SQL problem? Grill Chris Saxon or Connor McDonald of the Ask TOM team. 

Want to make the most of Oracle Database's amazing In-Memory feature? Andy Rivenes and Maria Colgan will take you through the key steps.

Started a new job and need to get up-to-speed on Multitenant? Patrick Wheeler will help you get going.

Struggling to get bulk collect working? Ask renowned PL/SQL expert, Steven Feuerstein.

Our experts live all over the globe. So even if you inhabit "Middleofnowhereland", you’re sure to find a timeslot that suits you.

You need to make the most of Oracle Database and its related technologies. It's our job to make it easy for you.

Ask TOM Office Hours: Dedicated to Customer Success

View the sessions and sign up now!



9th January 2018 |

Announcing Offline Persistence Toolkit for JavaScript Client Applications

We are excited to announce the open source release on GitHub of the offline-persistence-toolkit for JavaScript client applications, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

The Offline Persistence Toolkit is a client-side JavaScript library that provides caching and offline support at the HTTP request layer. This support is transparent to the user and is done through the Fetch API and an XHR adapter. HTTP requests made while the client device is offline are captured for replay when connection to the server is restored. Additional capabilities include a persistent storage layer, synchronization manager, binary data support and various configuration APIs for customizing the default behavior.

Whilst the toolkit is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any JavaScript client application that requires persistent storage and/or offline data access.

The Offline Persistence Toolkit simplifies life for application developers by providing a response caching solution that works well across modern browsers and web views. The toolkit covers common caching cases with a minimal amount of application-specific coding, but provides flexibility to cover non-trivial cases as well. In addition to providing the ability to cache complete response payloads, the toolkit supports "shredding" of REST response payloads into objects that can be stored, queried and updated on the client while offline.
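The capture-and-replay idea can be sketched conceptually. The following is an illustrative sketch only, not the toolkit's actual API: a wrapper that queues requests made while offline and replays them, in order, once the connection is restored.

```javascript
// Illustrative sketch only -- NOT the Offline Persistence Toolkit's actual API.
// A fetch wrapper that captures requests made while the client is offline and
// replays them, in order, once the connection to the server is restored.
class OfflineQueue {
  constructor(realFetch) {
    this.realFetch = realFetch; // e.g. the global fetch function
    this.online = true;
    this.pending = [];
  }

  fetch(url, options = {}) {
    if (this.online) {
      return this.realFetch(url, options);
    }
    // Offline: capture the request for later replay and answer synthetically.
    this.pending.push({ url, options });
    return Promise.resolve({ status: 202, queued: true });
  }

  async goOnline() {
    this.online = true;
    const toReplay = this.pending.splice(0); // drain the queue
    const results = [];
    for (const req of toReplay) {
      results.push(await this.realFetch(req.url, req.options)); // replay in order
    }
    return results;
  }
}
```

A real implementation also needs persistent storage for the queue (so it survives restarts) and conflict handling on replay, which is exactly what the toolkit's synchronization manager and persistent storage layer provide.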

The architecture diagram illustrates the major components of the toolkit and how an application interacts with it:

The Offline Persistence Toolkit is distributed as an npm package consisting of AMD modules.

To install the toolkit, enter the following command at a terminal prompt in your app’s top-level directory:

$ npm install @oracle/offline-persistence-toolkit


The toolkit makes heavy use of the Promise API. If you are targeting environments that do not support the Promise API, you will need to polyfill this feature. We recommend the es6-promise polyfill.
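A minimal guard for this, assuming the es6-promise package is installed, might look like the following; the conditional means the polyfill is loaded only in environments that actually lack a native Promise.

```javascript
// Hedged sketch: load the es6-promise polyfill only when the environment lacks
// a native Promise implementation. Modern browsers and Node.js ship one, so
// this branch is normally skipped. Assumes es6-promise is installed via npm.
if (typeof Promise === 'undefined') {
  require('es6-promise').polyfill();
}

// After this point, Promise is safe to use either way.
const ready = Promise.resolve('toolkit ready');
```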

The toolkit does not have a dependency on a specific client-side storage solution, but does include a PouchDB adapter. If you plan to use PouchDB for your persistent store, you will need to install the following PouchDB packages:

$ npm install pouchdb pouchdb-find


For more information about how to make use of this toolkit in your Oracle JET application or any other JavaScript application, refer to the toolkit's README, which also provides details about why we developed this toolkit, how to include it into your app, some simple use cases and links to JS Doc and more advanced use cases.

You can also refer to the JET FixItFast sample app, which makes use of the toolkit. Browse the source code directly, or use the Oracle JET command line interface to build and deploy the app and see how it works.

I hope you find this toolkit really useful. If you have any feedback, please submit issues on GitHub.

For more technical articles about the Offline Persistence Toolkit, Oracle JET and other products, you can also follow OracleDevs on


22nd December 2017 |

New Release of Node.js Module for Oracle Database: node-oracledb 2.0 is out

It's been perhaps the most requested feature, and it's been delivered! You can now get pre-built binaries with all the required dependencies to connect your Node.js applications to an Oracle Database instance. Version 2.0 is the first release to have pre-built binaries. Node-oracledb 2.0.15, the Node.js add-on for Oracle Database, is now on npm for general use. These are provided for convenience and will make life a lot easier, particularly for Windows users.

With improvements throughout the code and documentation, this release is looking great. There are now over 3000 functional tests, as well as solid stress tests we run in various environments under Oracle's internal testing infrastructure.

Binaries for Node 4, 6, 8 and 9 are also available for Windows 64-bit, macOS 64-bit, and Linux 64-bit (built on Oracle Linux 6).

Simply add oracledb to your package.json dependencies or manually install with:


$ npm install oracledb


Review the CHANGELOG for all changes. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0. To know more about this release, go check out the detailed announcement.

Related content



20th December 2017 |

Podcast: Blockchain: Beyond Bitcoin

Blockchain originally gained attention thanks to its connection to Bitcoin. But blockchain has emerged from under the crypto-currency’s shadow to become a powerful trend in enterprise IT -- and something that should be on every developer's radar.  For this program we’ve assembled a panel of blockchain experts to discuss the technology's impact, examine some use cases, and offer suggestions for developers who want to learn more in order to take advantage of the opportunities blockchain represents.


This program was recorded on Thursday, November 9, 2017.


The Panelists

Listed alphabetically


Lonneke Dikmans
Chief Product Officer, eProseed, Utrecht, NL
Oracle Developer Champion


John King
Tech Enablement Specialist/Speaker/Trainer/Course Developer, King Training Resources, Scottsdale, AZ


Robert van Mölken
Senior Integration / Cloud Specialist, AMIS, Utrecht, NL
Oracle Developer Champion


Arturo Viveros
SOA/Cloud Architect, Sysco AS, Oslo, NO
Oracle Developer Champion


Additional Resources Coming Soon
  • Combating Complexity
    Chris Newcombe, Chris Richardson, Adam Bien, and Lucas Jellema discuss the creeping complexity in software development and strategies heading off the "software apocalypse."
  • DevOps: Can This Marriage be Saved
    Nicole Forsgren, Leonid Igolnik, Alena Prokharchyk, Baruch Sadogursky, Shay Shmeltzer, and Kelly Shortridge discuss the state of DevOps, where organizations get it wrong, and what developers can do to thrive in a DevOps environment.

Never miss an episode! The Oracle Developer Podcast is available via...


6th December 2017 |

Announcing Open Source Jenkins Plugin for Oracle Cloud Infrastructure

Jenkins is a continuous integration and continuous delivery application that you can use to build and test your software projects continuously. The Jenkins OCI Plugin is now available on GitHub; it allows users to access and manage Oracle Cloud Infrastructure resources from Jenkins. A Jenkins master instance with the Jenkins OCI Plugin can spin up slaves (instances) on demand within Oracle Cloud Infrastructure, and remove them automatically once the job completes.

After installing the Jenkins OCI Plugin, you can add an OCI Cloud option and a Template with the desired Shape, Image, Domain, and so on. The Template has a Label that you can use in your Jenkins Job. Multiple Templates are supported. Template options include Labels, Domains, Credentials, Shapes, Images, Slave Limits, and Timeouts.

Below you will find instructions for building and installing the plugin, which is available on GitHub:

Installing the Jenkins OCI Plugin

The following section covers compiling and installing the Jenkins OCI Plugin.

Plugins required:
  • credentials v2.1.14 or later
  • ssh-slaves v1.6 or later
  • ssh-credentials v1.13 or later
Compile and install OCI Java SDK:

Refer to OCI Java SDK issue 25. Tested with Maven versions 3.3.9 and 3.5.0.

Step 1 – Download plugin

$ git clone
$ cd oci-java-sdk
$ mvn compile install

Step 2 – Compile the Plugin hpi file

$ git clone
$ cd jenkins-oci-plugin
$ mvn compile hpi:hpi

Step 3 – Install hpi

  • Option 1 – Manage Jenkins > Manage Plugins > Click the Advanced tab > Upload Plugin section, click Choose File > Click Upload

  • Option 2 – Copy the downloaded .hpi file into the JENKINS_HOME/plugins directory on the Jenkins master

Restart Jenkins and “OCI Plugin” will be visible in the Installed section of Manage Plugins.

For more information on configuring the Jenkins Plugin for OCI, please refer to the documentation on the GitHub project. And if you have any issues or questions, please feel free to contact the development team by submitting through the Issues tab.

Related content


6th December 2017 |

Kubernetes, Serverless, and Federation – Oracle at KubeCon 2017

Today at the KubeCon + CloudNativeCon 2017 conference in Austin, TX, the Oracle Container Native Application Development team open sourced two new Kubernetes related projects which we are also demoing here at the show.  First, we have open sourced an Fn Installer for Kubernetes. Fn is an open source serverless project announced this October at Oracle OpenWorld.  This Helm Chart for Fn enables organizations to easily install and run Fn on any Kubernetes deployment including on top of the new Oracle managed Kubernetes service Oracle Container Engine (OCE). 

Second, we have open sourced Global Multi-Cluster Management, a new set of distributed cluster management features for Kubernetes federation that intelligently manages highly distributed applications – “planet-scale” if you will - that are multi-region, hybrid, or even multi-cloud.  In a federated world, many operational challenges emerge - imagine how you would manage and auto-scale global applications or deploy spot clusters on-demand.  For more info, make sure to check out the Multi-Cluster Ops in a Hybrid World session by Kire Filipovski and Vitaliy Zinchenko on Thursday December 7 at 3:50pm!

Pushing Ahead: Keep it Open, Integrated and Enterprise-Grade

Customers are seeking an open, cloud-neutral, and community-driven container-native technology stack that avoids cloud lock-in and allows them to run the same stack in the public cloud as they run locally. This was our vision when we launched the Container Native Application Development Platform at Oracle OpenWorld 2017 in October.


Since then, Oracle Container Engine was included in the first wave of Certified Kubernetes platforms announced in November 2017, helping developers and dev teams be confident that there is consistency and portability amongst products and implementations.

So, the community is now looking for the same assurances from their serverless technology choice: make it open and built in a consistent way to match the rest of their cloud native stack.  In other words, make it open and on top of Kubernetes.  And if the promise of an open-source based solution is to avoid cloud lock-in, the next logical request is to make it easy for DevOps teams to operate across clouds or in a hybrid mode.  This lines up with the three major “asks” we hear from customers, development teams and enterprises: their container native platform must be open, integrated, and enterprise-grade:

  • Open: Open on Open

Both the Fn project and Global Multi-Cluster Management are cloud neutral and open source. Doubling down on open, the Fn Helm Chart enables the open serverless project (Fn) to run on the leading open container orchestration platform (Kubernetes).   (Sure beats closed on closed!)  The Helm Chart deploys a fully functioning cluster of Fn on a Kubernetes cluster using the Helm package manager.

  • Integrated: Coherent and Connected

Delivering on the promise of an integrated platform, both the Fn Installer Helm Charts and Global Multi-Cluster Management are built to run on top of Kubernetes and thus integrate natively into Oracle’s Container Native Platform.  While having one of everything works in a Home Depot or Costco, it’s no way to create an integrated, effortless application developer experience – especially at scale, across hundreds if not thousands of developers in an organization.  Both the Fn installer and Global Multi-Cluster Management will be available on top of OCE, our managed Kubernetes service.

  • Enterprise-Grade: HA, Secure, and Operationally Aware

With the ability to deploy Fn to an enterprise-grade Kubernetes service such as Oracle Container Engine you can run serverless on a highly-available and secure backend platform.  Furthermore, Global Multi-Cluster Management extends the enterprise platform to multiple clusters and clouds and delivers on the enterprise desire for better utilization and capacity management. 

Production operations for large distributed systems is hard enough in a single cloud or on-prem, but becomes even more complex with federated deployments – such as multiple clusters applied across multi-regions, hybrid (cloud/on-prem), and multi-cloud scenarios.  So, in these situations, DevOps teams need to deploy and auto-scale global applications or spot clusters on-demand and enable cloud migrations and hybrid scenarios.

With Great Power Comes Great Responsibility (and Complexity)

So, with the power of Kubernetes federation come great responsibility and new complexities: how to apply application-aware decision logic to container native deployments.  Thorny business and operational issues can include cost, regional affinity, performance, quality of service, and compliance.  When DevOps teams are faced with managing multiple Kubernetes deployments, they can also struggle with multiple cluster profiles deployed on a mix of on-prem and public cloud environments.  These basic DevOps questions are hard to answer:

  • How many clusters should we operate?
    • Do we need separate clusters for each environment?
    • How much capacity do we allocate for each cluster?
  • Who will manage the lifecycle of the clusters?
  • Which cloud is best suited for my application?
  • How do we avoid cloud lock-in?
  • How do we deploy applications to multiple clusters?

The three open source components that make up Global Multi-Cluster Management are:
  • Navarkos (which means Admiral in Greek) enables a Kubernetes federated deployment to automatically manage multi-cluster infrastructure and manage clusters in response to federated Kubernetes application deployments.
  • Cluster Manager provides lifecycle management for Kubernetes clusters using a Kubernetes federation backend.
  • Federated Ingress Controller is an alternative implementation of federated ingress using external DNS.

Global Multi-Cluster Management works with Kubernetes federation to solve these problems in several ways:

  • Creates Kubernetes clusters on demand and deploys apps to them (only when there is a need)
    • Clusters can be run on any public or private cloud platform
    • Runs applications by matching supply and demand
  • Manages cluster consistency and cluster life-cycle
    • Ingress, nodes, network
  • Control multi-cloud application deployments
    • Control applications independently of cloud provider
  • Application-aware clusters
    • Clusters are offline when idle
    • Workloads are scaled automatically
    • Provides the basis to help decide where apps run based on factors that could include cost, regional affinity, performance, quality of service and compliance

Global Multi-Cluster Management ensures that Kubernetes clusters are created, sized, and destroyed only when the requested application deployments need them.  If there are no application deployments, there are no clusters. As DevOps teams deploy applications to a federated environment, Global Multi-Cluster Management makes intelligent decisions about whether clusters should be created, how many, and where.  At any point in time the live clusters are in tune with the current demand for applications, and the Kubernetes infrastructure becomes more application and operationally aware.
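The on-demand model above can be sketched as a toy reconciler (illustrative only, not Navarkos code): given the regions demanded by application deployments and the clusters that currently exist, compute which clusters to create and which to destroy.

```javascript
// Toy reconciler illustrating the idea behind Global Multi-Cluster Management:
// clusters exist only while application deployments demand them. The shapes of
// the inputs (deployments with a `region` field, clusters as region names) are
// assumptions for the sake of the sketch.
function reconcile(deployments, clusters) {
  const demanded = new Set(deployments.map((d) => d.region)); // regions apps need
  const existing = new Set(clusters);                          // regions we have
  const toCreate = [...demanded].filter((r) => !existing.has(r));
  const toDestroy = [...existing].filter((r) => !demanded.has(r));
  return { toCreate, toDestroy };
}
```

With no deployments, `reconcile([], clusters)` asks to destroy everything, matching the "no application deployments, no clusters" rule; the real system layers capacity, cost, and affinity decisions on top of this basic loop.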

See Us at Booth G8, Join our Sessions, & Learn More at KubeCon + CloudNativeCon 2017

Come see us at Booth G8 and meet our engineers and contributors!  As Austin locals (the old StackEngine team), we’re excited to welcome you all (y’all) to Austin.  Make sure to join in to “Keep Cloud Native Weird.”  And be fixin’ to check out these sessions:



4th December 2017 |

Announcing The New Open Source WebLogic Monitoring Exporter on GitHub

As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter. This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana.

We are also making the WebLogic Monitoring Exporter tool available as open source on GitHub, which will allow our community to contribute to this project and be part of enhancing it. 

The WebLogic Monitoring Exporter is implemented as a web application that is deployed to the WebLogic Server instances that are to be monitored. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics.  With a single HTTP query, and no special setup, it provides an easy way to select the metrics that are monitored for a managed server.

For detailed information about the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server.

Prometheus collects the metrics that have been scraped by the WebLogic Monitoring Exporter. By constructing Prometheus-defined queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain.

We can use Grafana to display these metrics in graphical form.  Connect Grafana to Prometheus, and create queries that take the metrics scraped by the WebLogic Monitoring Exporter and display them in dashboards.

For more information, see Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes.

Get Started!

Get started building and deploying the WebLogic Monitoring Exporter, set up Prometheus and Grafana, and monitor the metrics from the WebLogic Managed Servers in a domain/cluster running in Kubernetes.

  • Clone the source code for the WebLogic Monitoring Exporter from GitHub.
  • Build the WebLogic Monitoring Exporter following the steps in the README file.
  • Install both Prometheus and Grafana on the host where you are running Kubernetes.
  • Start a WebLogic on Kubernetes domain; find a sample in GitHub.
  • Deploy the WebLogic Monitoring Exporter to the cluster where the WebLogic Managed servers are running.
  • Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes, which steps you through collecting metrics in Prometheus and displaying them in Grafana dashboards.

We welcome you to try this out. It's a good start to making the transition to open source monitoring tools.  We can work together to enhance it and take full advantage of its functionality in Docker/Kubernetes environments.



1st December 2017 |

Updates to Oracle Cloud Infrastructure CLI

We’ve been hard at work the last few months making updates to our command line interface for Oracle Cloud Infrastructure, and wanted to take a minute to share some of the new functionality! The full list of new features and services can be found in our changelog on GitHub, and below are a few core features we wanted to call out specifically:


We know how tedious it can be to type the same values again and again while using the CLI, so we have added the ability to specify default values for parameters. The example below shows a sample oci_cli_rc file that sets two defaults: one at the global level, applied to all operations with a --compartment-id parameter, and one scoped to ‘os’ (Object Storage) commands, applied to all ‘os’ commands with a --namespace parameter.

Content of ~/.oci/oci_cli_rc:

[DEFAULT]
# globally scoped default for all operations with a --compartment-id parameter
compartment-id=ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2…
# default for --namespace scoped specifically to Object Storage commands
os.namespace=mynamespace

Example commands that no longer need explicit parameters:

oci compute instance list   # no --compartment-id needed
oci os bucket list          # no --compartment-id or --namespace needed


Command and parameter aliases

To help with specifying long command and parameter names, we have also added support for defining aliases. The example oci_cli_rc file below shows examples of defining aliases for commands and parameters:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_PARAM_ALIASES]
--ad=--availability-domain
-a=--availability-domain
--dn=--display-name

[OCI_CLI_COMMAND_ALIASES]
# This lets you use "ls" instead of "list" for any list command in the CLI (e.g. oci compute instance ls)
ls = list
# This lets you do "oci os object rm" rather than "oci os object delete"
rm = os.object.delete

Table output

JSON output is great for parsing but can be problematic when it comes to readability on the command line. To help with this we have added table output format which can be triggered for any operation by supplying --output table. This also makes it easier to use common tools like grep and awk on the CLI output to grab specific records from a table. See the section on JMESPath below to see how you can filter data to make your table output more concise.

Here is an example command and output:

oci iam region list --output table

+-----+----------------+
| key | name           |
+-----+----------------+
| FRA | eu-frankfurt-1 |
| IAD | us-ashburn-1   |
| PHX | us-phoenix-1   |
+-----+----------------+

JMESPath queries

A CLI operation will often return more data than you are interested in. To help with filtering and querying data from CLI responses, we have added the --query option, which allows running arbitrary JMESPath queries on the CLI output before the data is returned.

For example, if you want to list all of the instances in your compartment but only see the display-name and lifecycle-state of each, you can use the following query:

# using the oci_cli_rc file from above so we don’t have to specify --compartment-id
oci compute instance list --query 'data[*].{"display-name":"display-name","lifecycle-state":"lifecycle-state"}'

This is especially convenient for use with table output so you can limit the output to a size that will fit in your terminal.
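To make the projection concrete, here is what that query computes, sketched in plain JavaScript. The sample payload and its field values are illustrative, not real CLI output; in practice the CLI evaluates the JMESPath expression for you.

```javascript
// Illustrative response shaped like `oci compute instance list` output
// (field values are made up for the sketch).
const response = {
  data: [
    { id: 'ocid1.instance.oc1..aaa', 'display-name': 'web-1', 'lifecycle-state': 'RUNNING', shape: 'VM.Standard1.1' },
    { id: 'ocid1.instance.oc1..bbb', 'display-name': 'web-2', 'lifecycle-state': 'STOPPED', shape: 'VM.Standard1.1' },
  ],
};

// Equivalent of the JMESPath expression
// data[*].{"display-name":"display-name","lifecycle-state":"lifecycle-state"}:
// project each element of data down to just the two requested fields.
const projected = response.data.map((inst) => ({
  'display-name': inst['display-name'],
  'lifecycle-state': inst['lifecycle-state'],
}));

console.log(JSON.stringify(projected, null, 2));
```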

You can also define queries in your oci_cli_rc file and reference them by name so you don’t have to type out complex queries, for example:

Content of ~/.oci/oci_cli_rc:

[OCI_CLI_CANNED_QUERIES]
get_id_and_display_name_from_list=data[*].{id: id, "display-name": "display-name"}

Example command:

oci compute instance list -c $C --query query://get_id_and_display_name_from_list

To help you get started with some of these features, we have added the command 'oci setup oci-cli-rc', which generates a sample oci_cli_rc file with examples of canned queries, defaults, and parameter/command aliases.

JSON Input made easier

We have made a number of improvements to how our CLI works with complex parameters that require JSON input:

Reading JSON parameters from a file:

For any parameter marked as a "COMPLEX TYPE" you can now specify the value to be read from a file using the "file://" prefix instead of needing to format a JSON string on the command line. For example:

oci iam policy create --statements file://statements.json

Generate JSON skeletons for a single parameter

To help with specifying JSON input from a file we have added --generate-param-json-input to each command with complex parameters to enable generating a JSON template for a given input parameter. For example, if you are not sure of the format for the oci iam policy create --statements parameter you can issue the following command to generate a template:

oci iam policy create --generate-param-json-input statements

Output:

[
  "string",
  "string"
]

You can then fill out this template and specify it as the input to a create policy call like so:

oci iam policy create --statements file://statements.json

Generate JSON skeletons for full command input

We also support generating a JSON skeleton for the full command input. A common workflow with this parameter is to dump the full JSON skeleton to a file, edit the file with the input values you want, and then execute the command using that file as input. Here is an example:

# command to emit full JSON skeleton for command to a file input.json
oci os preauth-request create --generate-full-command-json-input > input.json

# view content of input.json and edit values
cat input.json
{
  "accessType": "ObjectRead|ObjectWrite|ObjectReadWrite|AnyObjectWrite",
  "bucketName": "string",
  "name": "string",
  "namespace": "string",
  "objectName": "string",
  "opcClientRequestId": "string",
  "timeExpires": "2017-01-01T00:00:00.000000+00:00"
}

# run create pre-authenticated request with the values specified from a file
oci os preauth-request create --from-json file://input.json

Windows auto-complete for PowerShell

We have now added tab completion for Windows PowerShell! Completion works on commands and parameters and can be enabled with the following command:

oci setup autocomplete

For more in-depth documentation on these features and more, check out our main CLI documentation page here.

Related content


1st December 2017 |

Announcing Mobile Authentication Plugin for Apache Cordova, and More!

We are excited to announce the open source release on GitHub of the cordova-plugin-oracle-idm-auth plugin for Apache Cordova, developed by the Oracle JavaScript Extension Toolkit (Oracle JET) team.

This plugin provides a simple JavaScript API for performing complex authentication, powered by a native SDK developed by the Oracle Access Management Mobile & Social (OAMMS) team. The SDK has been tested and verified against Oracle Access Manager (OAM) and Oracle Identity Cloud Service (IDCS), and is compatible with other third-party authentication applications that support Basic Authentication, OAuth, Web SSO, or OpenID Connect.

Whilst the plugin is primarily intended for hybrid mobile applications created using Oracle JET, it can be used within any Cordova-based app targeting Android or iOS.

Most mobile authentication scenarios are complex, often requiring interaction with the native operating system for use cases such as:

  • Retrieving authentication tokens and cookies following successful authentication
  • Securely storing tokens and user credentials
  • Performing offline authentication and automatic login

Writing code to handle each of the required authentication scenarios, especially within hybrid mobile applications, is tedious and can be error-prone.

The cordova-plugin-oracle-idm-auth plugin significantly reduces the amount of coding required to successfully authenticate your users and handle various error cases, by abstracting the complex logic behind a set of simple JavaScript APIs, thus allowing you to focus on implementation of your mobile app’s functional aspects.

To add this plugin to your Oracle JET app:

$ ojet add plugin cordova-plugin-oracle-idm-auth


To learn more about the Oracle JET CLI, visit the ojet-cli project.

To add this plugin to your plain Apache Cordova app:

$ cordova plugin add cordova-plugin-oracle-idm-auth


Although the plugin itself contains detailed documentation, stay tuned for more technical posts describing common usage scenarios.

The release of this plugin continues Oracle’s commitment to the open source Apache Cordova community, along with these previously released plugins:

Hope you enjoy, and if you have any feedback, please submit issues to our Cordova projects on GitHub.





22nd November 2017 |

Introducing Data Hub Cloud Service to Manage Apache Cassandra and More

Today we are introducing the general availability of the Oracle Data Hub Cloud Service. With Data Hub, developers can initialize and run Apache Cassandra clusters on demand without having to manage backups, patching, and scaling for those clusters themselves. Oracle Data Hub is also a foundation for other databases, such as MongoDB and Postgres, coming in the future. Read the full press release from OpenWorld 2017.

The Data Hub Cloud Service provides the following key benefits:

  • Dynamic Scalability – users have access to an API and a web console interface to perform operations such as scale-up/scale-down and scale-out/scale-in in minutes, and to size their clusters according to their needs.
  • Full Control – as development teams migrate from an on-premises environment to the cloud, they continue to have full secure shell (ssh) access to the underlying virtual machines (VMs) hosting these database clusters, so they can log in and perform management tasks the same way they always have.

Developers may be looking for more than relational data management for their applications. MySQL and Oracle Database have been available on Oracle Cloud for quite some time. Today, application developers want the flexibility to choose the database technology according to the data models they use within their application. This use-case-specific approach enables developers to choose the Oracle Database Cloud Service when appropriate, and in other cases to choose other database technologies such as MySQL, MongoDB, Redis, or Apache Cassandra.

In such a polyglot development environment, enterprise IT faces the key challenge of how to support and lower the total cost of ownership (TCO) of managing such open source database technologies within the organization. This is specifically the problem that the Oracle Data Hub Cloud Service addresses.

How to Use Data Hub Cloud Service

Using the Data Hub Cloud Service to provision, administer, or monitor an Apache Cassandra database cluster is simple. You can create an Apache Cassandra database cluster with as many nodes as you would like in two steps:

  • Step 1
    • Choose between Oracle Cloud Infrastructure and Oracle Cloud Infrastructure Classic regions
    • Choose between the latest (3.11) and stable (3.10) Apache Cassandra database versions
  • Step 2
    • Choose the cluster size, compute shape (processor cores) and the storage size. Don't worry about choosing the right value here. You can always dynamically resize when you need additional compute power or storage.
    • Provide the shell access information so that you have full control of your database clusters.

Flexibility to choose the Database Version

When you create the cluster, you have the flexibility to choose the Apache Cassandra version. Additionally, you can easily patch to the latest release as it becomes available for that version. Once you choose to apply the patch, the service applies it across your cluster in a rolling fashion to minimize downtime.

Dynamic Scaling

During provisioning, you have the flexibility to choose the cluster size, the compute shapes (compute cores and memory), and the storage sizes for all the nodes within the cluster. This flexibility allows you to choose the compute and storage shapes that best meet your workload and performance requirements.
If you want to add nodes to your cluster (commonly referred to as scale-out) or additional storage to the nodes in the cluster, you can easily do so using the Data Hub Cloud Service API or Console. So you don't have to worry about sizing your workload at provisioning time.

Full Control

You have full shell access to all the nodes within the cluster, giving you full control of the underlying database and its storage. You also have the flexibility to log in to these nodes and configure the database instances to meet your scalability and performance requirements.

Once you select Create, the service creates the compute instances, attaches the block volumes to the nodes, and lays out the Apache Cassandra binaries on each node in the cluster. On the Oracle Cloud Infrastructure Classic platform, the service also automatically enables the network access rules so that you can begin using the CQL (Cassandra Query Language) shell to create your Cassandra database. On the Oracle Cloud Infrastructure platform, you have full control and flexibility to create this cluster within a specific subnet in the virtual cloud network (VCN).
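As a hedged sketch of that first step, you might stage your schema in a CQL script and then feed it to cqlsh from a machine with network access to the cluster (the keyspace, table, and node address below are illustrative, not part of the service):

```shell
# Stage the schema in a CQL script (keyspace and table names are illustrative)
cat > schema.cql <<'EOF'
CREATE KEYSPACE IF NOT EXISTS app
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

CREATE TABLE IF NOT EXISTS app.users (
  id   uuid PRIMARY KEY,
  name text
);
EOF

# From a host that can reach the cluster (node IP is illustrative):
# cqlsh 10.0.0.12 -f schema.cql
```

Keeping the DDL in a script like this makes the same schema repeatable across development and production clusters.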

Getting Started

This service is accessible via the Oracle My Services dashboard for users already on Universal Credits. And if you're not already using Oracle Cloud, you can start off with free Cloud credits to explore the services. We'd appreciate it if you gave this service a spin and shared your feedback.

Additional Reference


22nd November 2017 |

Linuxgiving! The Things We do With and For Oracle Linux

By: Sergio Leunissen - VP, Operating Systems & Virtualization 

It is almost Thanksgiving, so you may be thinking about things that you're thankful for: good food, family, and friends. When it comes to making your (an enterprise software developer's) work life better, your list might include Docker, Kubernetes, VirtualBox, and GitHub. I'll bet Oracle Linux wasn't on your list, but here's why it should be…

As enterprises move to the Cloud and DevOps increases in importance, application development also has to move faster. Here’s where Oracle Linux comes in. Not only is Oracle Linux free to download and use, but it also comes pre-configured with access to our Oracle Linux yum server with tons of extra packages to address your development cravings, including:

If you're still craving something sweet, you can add less complexity to your list: with Oracle Linux you'll have the advantage of running the exact same OS and version in development as you do in production (on-premises or in the cloud).

And, we’re constantly working on ways to spice-up your experience with Linux, from things as simple as "make it boot faster," to always-available diagnostics for network filesystem mounts, to ways large systems can efficiently parallelize tasks. These posts, from members of the Oracle Linux Kernel Development team, will show you how we are doing this:

Accelerating Linux Boot Time

Pasha Tatashin describes optimizations to the kernel to speed up booting Linux, especially on large systems with many cores and large memory sizes.

Tracing NFS: Beyond tcpdump

Chuck Lever describes how we are investigating new ways to trace NFS client operations under heavy load and on high-performance network fabrics, so that system administrators can better observe and troubleshoot this network file system.

ktask: A Generic Framework for Parallelizing CPU-Intensive Work

Daniel Jordan describes a framework that’s been submitted to the Linux community which makes better use of available system resources to perform large scale housekeeping tasks initiated by the kernel or through system calls.

On top of this, you can have your pumpkin, apple or whatever pie you like and eat it too – since Oracle Linux Premier Support is included with your Oracle Cloud Infrastructure subscription – yes, that includes Ksplice zero down-time updates and much more at no additional cost.

Most everyone's business runs on Linux now; it's at the core of today's cloud computing. There are still areas to improve, but if you look closely, Oracle Linux is the OS you'll want for app dev in your enterprise.


15th November 2017 |

Podcast: What's Hot? Tech Trends That Made a Real Difference in 2017

Innovation never sleeps, and tech trends come at you from every angle. That's business as usual in the software developer's world. In 2017, microservices, containers, chatbots, blockchain, IoT, and other trends drew lots of attention and conversation. But what trends and technologies penetrated the hype to make a real difference?

In order to get a sense of what's happening on the street, we gathered a group of highly respected software developers, recognized leaders in the community, crammed them into a tiny hotel room in San Francisco (they were in town to present sessions at JavaOne and Oracle OpenWorld), tossed in a couple of microphones, and asked them to talk about the technologies that actually had an impact on their work over the past year. The resulting conversation is lively, wide-ranging, often funny, and insightful from start to finish. Listen for yourself.

The Panelists

(listed alphabetically)

Lonneke Dikmans
Chief Product Officer, eProseed
Oracle ACE Director
Developer Champion


Lucas Jellema
Chief Technical Officer, AMIS Services
Oracle ACE Director
Developer Champion


Frank Munz
Software Architect, Cloud Evangelist, Munz & More
Oracle ACE Director
Developer Champion


Pratik Patel
Chief Technical Officer, Triplingo
President, Atlanta Java Users Group
Java Champion
Code Champion


Chris Richardson
Founder, Chief Executive Officer, Eventuate Inc.
Java Champion
Code Champion


Additional Resources

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:



9th November 2017 |

An API First Approach to Microservices Development

By Claudio Caldato, Sr. Director Development and Boris Scholl, VP Development - Microservices, Oracle Cloud


Over the last couple of years, our work on various microservices platforms in the cloud has brought us into close collaboration and engagement with many customers. As a result, we have developed a deep understanding of what developers struggle with when adopting microservices architectures, in addition to a deep knowledge of distributed systems. A major motivation for joining Oracle, besides working with a great team of very smart people from startups, Amazon, and Microsoft, was the opportunity to build from scratch a platform based on open source components that truly addresses the developer. In this initial blog post on our new platform, we will describe what drove the design of our platform and present an overview of the architecture.

What developers are looking for

Moving to microservices is not an easy transition for developers that have been building applications using more traditional methods. There are a lot of new concepts and details developers need to become familiar with and consider when they design a distributed application, which is what a microservice application is. Throw containers and orchestrators into the mix and it becomes clear why many developers struggle to adapt to this new world.  

Developers now need to think about their applications in terms of a distributed system with a lot of moving parts; as a result, challenges such as resiliency, idempotency and eventual consistency, just to name a few, are important aspects they now need to take into account. 

In addition, with the latest trends in microservices design and best practices, they also need to learn about containers and orchestrators to make their applications and services work. Modern cluster management and container orchestration solutions such as Kubernetes, Mesos/Marathon or Docker Swarm are improving over time, which simplifies things such as networking, service discovery, etc., but they are still an infrastructure play. The main goal of these tools and technologies is to handle the process of deploying and connecting services, and guarantee that they keep running in case of failures. These aspects are more connected with the infrastructure used to host the services than the actual services themselves. Developers need to have a solid understanding of how orchestrators work, and they need to take that into account when they build services. Programming model and infrastructure are entangled; there is no clear separation, and developers need to understand the underlying infrastructure to make their services work. 

One obvious thing that we have heard repeatedly from our customers and the open source community is that developers really want to focus on the development of the logic, not on the code necessary to handle the execution environment where the service will be deployed, but what does that really mean?  

It means that above all, developers want to focus on APIs (the only thing needed to connect to another service), develop their services in a reactive style, and sometimes just use ‘functions’ to perform simple operations, when deploying and managing more complex services involves too much overhead.  

There is also a strong preference among developers to have a platform built on an OSS stack to avoid vendor lock-in, and to enable hybrid scenarios where public cloud is used in conjunction with on-premise infrastructure.  

It was the copious feedback heard from customers and developers that served as our main motivation to create an API-first microservices platform, and it is based on the following key requirements: 

  • Developers can focus solely on writing code: API-first approach 
  • It combines the traditional REST-based programming model with a modern reactive event-driven model  
  • It consolidates traditional container-based microservices with a serverless/FaaS infrastructure, offering more flexibility so developers can pick the right tool for the job 
  • Easy onboarding of 'external' services so developers can leverage things such as cloud services, and can connect to legacy or 3rd party services easily 

We were asked many times how we would describe our platform, as it covers more than just microservices; so, in a humorous moment, we came up with the Grand Unified Theory of Container Native Development.


The Platform Approach 

So what does the platform look like and what components are being used? Before we get into the details let’s look at our fundamental principles for building out this platform:

  • Opinionated and open: make it easy for developers to get productive right away, but also provide the option to go deep in the stack or even replace modules. 
  • Cloud vendor agnostic: although the platform will work best on our New Application Development Stack, customers need to be able to install it on top of any cloud infrastructure. 
  • Open source-based stack: we are strong believers in OSS; our stack is entirely built upon popular OSS components and will itself be available as OSS. 

The Platform Architecture 

Figure 1 shows the high level architecture of our platform and the functionality of each component. 

Let’s look at all the major components of the platform. We start with the API registry as it changes how developers think about, build, and consume microservices. 

API Registry: 

The API registry stores all the information about available APIs in the cluster. Developers can publish an API to make it easier for other developers to use their service. Developers can search for a particular service or function (if there is a serverless framework installed in the cluster). Developers can test an API against a mock service even though the real service is not ready or deployed yet. To connect to a microservice or function in the cluster, developers can generate a client library in various languages. The client library is integrated into the source code and used to call the service. It will always automatically discover the endpoint in the cluster at runtime so developers don’t have to deal with infrastructure details such as IP address or port number that may change over the lifecycle of the service.  In future versions, we plan to add the ability for developers to set security and routing policies directly in the API registry. 

Event Manager: 

The event manager allows services and functions to publish events that other services and functions can subscribe to. It is the key component that enables an event-driven programming model, where EventProviders publish events and consumers – either functions or microservices – consume them. With the event manager, developers can combine a traditional REST-based programming model with a reactive/event-driven model in a consolidated platform that offers a consistent experience in terms of workflow and tools. 

Service Broker: 

In our transition to working for a major cloud vendor, we have seen that many customers choose to use managed cloud services instead of running and operating their services themselves on a Kubernetes cluster. A popular example of this is Redis cache, offered as a managed service by almost all major cloud providers. As a result, it is very common that a microservice-based application not only consists of services developed by the development team but also of managed cloud services. Kubernetes has introduced a great new feature called service catalog which allows the consumption of external services within a Kubernetes cluster. We have extended our initial design to not only configure the access to external services, but also to register user services with the API registry, so that developers can easily consume them along with the managed services. 

In this way external services, such as the ones provided by the cloud vendor, can be consumed like any other service in the cluster with developers using the same workflow: identify the APIs they want to use, generate the client library, and use it to handle the actual communication with the service. 

Service Broker is also our way to help developers engaged in modernizing their existing infrastructure, for instance by enabling them to package their existing code in containers that can be deployed in the cluster. We are also considering solving for scenarios in which there are existing applications that cannot be modernized; in this case, the Service Broker can be used to ‘expose’ a proxy service that publishes a set of APIs in the API Registry, thereby making the consumption of the external/legacy system similar to using any other microservice in the cluster.  

Kubernetes and Istio: 

We chose Kubernetes as the basis for our platform as it is emerging as the most popular container management platform to run microservices. Another important factor is that the community around Kubernetes is growing rapidly, and that there is Kubernetes support with every major cloud vendor.   

As mentioned before one of our main goals is to reduce complexity for developers. Managing communications among multiple microservices can be a challenging task. For this reason, we determined that we needed to add Istio as a service mesh to our platform. With Istio we get monitoring, diagnostics, complex routing, resiliency and policies for free. This removes a big burden from developers as they would otherwise need to implement those features; with Istio, they are now available at the platform level. 


Monitoring: 

Monitoring is an important component of a microservices platform. With potentially a lot of moving parts, the system requires a way to monitor its behavior at runtime. For our microservices platform we chose to offer an out-of-the-box monitoring solution which is, like the other components in our platform, based on proven and battle-tested technologies such as Prometheus, Zipkin/Jaeger, Grafana, and Vizceral. 

In the spirit of pushing the API-first approach to monitoring as well, our monitoring solution offers developers the ability to see how microservices are connected to each other (via Vizceral), see data flowing across them, and, in the future, gain insight into which APIs have been used. Developers can then use distributed tracing information in Zipkin/Jaeger to investigate potential latency issues or improve the efficiency of their services. In the future, we plan to add integration with other services. For instance, we will add the ability to correlate requests between microservices with data structures inside the JVM, so developers can optimize across multiple microservices by following how data is being processed for each request. 

What’s Next? 

This is an initial overview of our new platform, with some insight into our motivation and the design guidelines we used. We will follow with more blog posts that go deeper into the various aspects of the platform as we get closer to our initial OSS release in early 2018. Meanwhile, please take a look at our JavaOne session.

For more background on this topic, please see our other blog posts in the Getting Started with Microservices series. Part 1 discusses some of the main advantages of microservices, and touches on some areas to consider when working with them. Part 2 considers how containers fit into the microservices story. Part 3 looks at some basic patterns and best practices for implementing microservices. Part 4 examines the critical aspects of using DevOps principles and practices with containerized microservices. 


8th November 2017 |

Introducing Dev Gym! Free Training on SQL and More

There are many ways to learn. For example, you can read a book or blog post, watch a video, or listen to a podcast. All good stuff, which is what you'd expect me to say since I am the author of ten books on the Oracle PL/SQL language, and offer scores of videos and articles on my YouTube channel and blog, respectively.

But there's one problem with those learning formats: they're passive. One way or another, you sit there, and ingest data through your eyes and ears. Nothing wrong with that, but we all know that when it comes to writing code, that sort of knowledge is entirely theoretical.

If you want to get stronger, you can't just read about weightlifting and running. 

You've got to hit the gym and lift some weights. You've got to put on your running shoes and pound the pavement. 

Or as Confucius said it back in 450 BC:

Tell me and I will forget.
Show me and I may remember.
Involve me and I will understand.

It's the same with programming. Until you start writing code, and until you start reading and struggling to understand code, you haven't really learned anything.  To get good at programming, you need to engage in some active learning.

That's what the Oracle Dev Gym is all about. And it's absolutely, totally free. 

Learn from Quizzes

Multiple choice quizzes are the core learning mechanism on the Oracle Dev Gym. Our library of over 2,500 quizzes deepen your expertise by challenging you to read and understand code, a great complement to writing and running code.

The home page offers several featured quizzes that are hand-picked by experts from the Dev Gym's library of over 2,500 quizzes.

Looking for something in particular? Enter a keyword or two in the search bar and we'll show you what we've got on that topic.

After submitting your answer, you can explore the quiz's topic in more detail, with full verification code scripts, links to related resources and other quizzes, and discussion on the quiz.

You accumulate points for all the quizzes you answer, but your performance on these quizzes is not ranked. To play competitively against other developers, try our weekly Open Tournaments.

Check out this video on Dev Gym quizzes. 

Learn from Workouts

Quizzes are great, but when you know nothing about the topic of a quiz, they can leave you rather more confused than educated.

So to help you get started with concepts, we’ve created workouts. These contain resources to teach you about an aspect of programming, followed up by questions on the topic to test and reinforce your newly-gained knowledge.

A workout typically consists of a video or article followed by several quizzes. But a workout could also consist simply of a set of quizzes. Either way, go through the exercises of the workout and you will find yourself better able to tackle your real world programming challenges. Build your own custom workout, pick from available workouts, and set up daily workouts (single quiz workouts that expire each day).

Check out this video on Dev Gym workouts. 

Learn from Classes

Perhaps you’re looking for something more structured to help you learn. Then a Dev Gym class might be a perfect fit.

You can think of these as "mini-MOOCs". A MOOC is a massive open online course. The Oracle Learning Library offers a variety of MOOCs, and I strongly encourage you to try them out. Generally, you should expect a 3-5 hour per week commitment over several weeks. 

Dev Gym classes are typically lighter weight. Each class module consists of a video or blog post, followed by several quizzes to reinforce what you've learned. 

A great example of a Dev Gym class is Databases for Developers, a 12-week course by Chris Saxon, a member of the AskTOM Answer Team and all-around SQL wizard.

Check out this video on Dev Gym classes. 

Open Tournaments

Sometimes you just want to learn, and other times you want to test that knowledge against other developers. Let's face it: lots of humans like to compete, and we make it easy for you to do that with our weekly Open tournaments.

Each Saturday, we publish a brand-new quiz on SQL, PL/SQL, database design, or logic (this list will likely grow over time). You have until the following Friday to submit your answer. And if you don't want to compete but still want to tackle those brand-new quizzes, we let you opt out of ranking.

But for those of you who like to compete, you can check your rankings on the Leaderboard to see how you did in the previous week, month, quarter, and year. And if you finish the year ranked in the top 50 in a particular technology, you are eligible to compete in the annual championship.

Note that we do not show the results of your submission for an Open tournament until that week is over. Since the quiz is competitive, we don't want to make it easy for players to share results with others who may not yet have taken the quiz. And since the quiz is competitive, we also have rules against cheating. Read Competition Integrity for a description of what constitutes cheating at the Oracle Dev Gym.

Work Out Those Oracle Muscles!

So...are you ready to start working out those Oracle muscles and stretch your Oracle skills?

Visit the Oracle Dev Gym. Take a quiz, step up to a workout, or explore our classes.

Oh, and did I mention? It's all free!