Oracle Blogs | Oracle Developers Blog

 

18th July 2018 |

A Quick Look At What's New In Oracle JET v5.1.0

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided in the FAQ.

As indicated by the release number, v5.1.0 is a minor release, aimed at tweaking and consolidating features throughout the toolkit. As in other recent releases, new features have been added to support development of composite components, following the Composite Component Architecture (CCA). For details, see the entry on the new Template Slots in Duncan Mills's blog. Also, take note of the new design time metadata, as described in the release notes.

Aside from the work done in the CCA area, the key new features and enhancements to be aware of in the release are listed below, sorted alphabetically:

  • oj-chart: New "data" attribute. Introduces new attributes, slots, and custom elements.
  • oj-film-strip: New "looping" attribute. Specifies film strip navigation behavior, bounded ("off") or looping ("page").
  • oj-form-layout: Enhanced content flexibility. Removes restrictions on the types of children allowed in the "oj-form-layout" component.
  • oj-gantt: New "dnd" attribute and "ojMove" event. Provides new support for moving tasks via drag and drop.
  • oj-label-value: New component. Provides enhanced layout flexibility for the "oj-form-layout" component.
  • oj-list-view: Enhanced "itemTemplate" slot. Supports including the <LI> element in the template.
  • oj-swipe-actions: New component. Provides a declarative way to add swipe-to-reveal functionality to items in the "oj-list-view" component.

For all the details on the items above, see the release notes.

Note: Be aware that in Oracle JET 7.0.0, support for Yeoman and Grunt will be removed from generator-oraclejet and ojet-cli. As a consequence, the ojet-cli will be the only way to use the Oracle JET tooling, e.g., to create new Oracle JET projects, from that point on. Therefore, if you haven't yet migrated from Yeoman and Grunt to ojet-cli command line calls such as "ojet create", take some time to move in that direction before the 7.0.0 release.

As always, your comments and constructive feedback are welcome. If you have questions, or comments, please engage with the Oracle JET Community in the Discussion Forums and also follow @OracleJET on Twitter.

For organizations using Oracle JET in production, you're invited to be highlighted on the Oracle JET site, with the latest addition being a brand new Customer Success Story by Capgemini.

On behalf of the entire Oracle JET development team: "Happy coding!"

 

18th July 2018 |

Vibrant and Growing: The Current State of API Management

"Vibrant and growing all the time!" That's how Andrew Bell, Oracle PaaS API Management Architect at Capgemini, describes the current state of API management. "APIs are the doors to organizations, the means by which organizations connect to one another, connect their processes to one another, and streamline those processes to meet customer needs. The API environment is growing rapidly as we speak," Bell says.

"API management today is quite crucial," says Bell's Capgemini colleague Sander Rensen, an Oracle PaaS lead and architect, "especially for clients who want to go on a journey of digital transformation. For our clients, the ability to quickly find APIs and subscribe to them is a very crucial part of digital transformation."

"It's not just the public-facing view of APIs," observes Oracle ACE Phil Wilkins, a senior Capgemini consultant specializing in iPaaS. "People are realizing that APIs are an easier, simpler way to do internal decoupling. If I expose my back-end system in a particular way to another part of the organization — the same organization — I can then mask from you how I'm doing transformation or innovation or just trying to keep alive a legacy system while we try and improve our situation," Wilkins explains. "I think that was one of the original aspirations of WSDL and technologies like that, but we ended up getting too fine-grained and tying WSDLs to end products. Then the moment the product changed that WSDL changed and you broke the downstream connections."

Luis Weir, CTO of Capgemini's Oracle delivery unit and an Oracle Developer Champion and ACE Director, is just as enthusiastic about the state of API management, but sees a somewhat rocky road ahead for some organizations. "APIs are one thing, but the management of those APIs is something entirely different," Weir explains.

"API management is something that we're doing quite heavily, but I don't think all organizations have actually realized the importance of the full lifecycle management of the APIs. Sometimes people think of API management as just an API gateway. That’s an important capability, but there is far more to it."

Weir wonders if organizations understand what it means to manage an API throughout its entire lifecycle.

Bell, Rensen, Wilkins, and Weir are the authors of Implementing Oracle API Platform Cloud Service, now available from Packt Publishing, and as you'll hear in this podcast, they bring considerable insight and expertise to this discussion of what's happening in API management. The conversation goes beyond the current state of API management to delve into architectural implications, API design, and how working in SOA may have left you with some bad habits. Listen!

This program was recorded on June 27, 2018.

The Panelists

Andrew Bell
Oracle PaaS API Management Architect, Capgemini

Sander Rensen
Oracle PaaS Lead and Architect, Capgemini

Luis Weir
CTO, Oracle DU, Capgemini
Oracle Developer Champion
Oracle ACE Director

Phil Wilkins
Senior Consultant specializing in iPaaS
Oracle ACE

Additional Resources: Coming Soon

How has your role as a developer, DBA, or Sysadmin changed? Our next program will focus on the evolution of IT roles and the trends and technologies that are driving the changes.

 

12th July 2018 |

Keep Calm and Code On: Four Ways an Enterprise Blockchain Platform Can Improve Developer ...

A guest post by Sarabjeet (Jay) Chugh, Sr. Director Product Marketing, Oracle Cloud Platform

Situation

You just got a cool new Blockchain project for a client. As you head back to the office, you start to map out the project plan in your mind. Can you meet all of your client’s requirements in time? You're not alone in this dilemma.

You attend a blockchain conference the next day, get inspired by engaging talks, and meet fellow developers working on similar projects. A lunchtime chat with a new friend turns into a lengthy conversation about getting started with Blockchain.

Now you’re bursting with new ideas and ready to get started with your hot new Blockchain coding project. Right?

Well almost…

You go back to your desk and contemplate a plan of action to develop your smart contract or distributed application, thinking through the steps, including ideation, analysis, prototype, coding, and finally building the client-facing application.

Problem

It is then that reality sets in. You begin thinking beyond the proof of concept to the production phase, which will require additional things that you will need to design for and build into your solution. Additional things such as:
 

These things may delay or even prevent you from getting started with building the solution. Ask yourself questions such as:

  • Should I spend time trying to fulfill dependencies of open-source software such as Hyperledger Fabric on my own to start using it to code something meaningful?
  • Do I spend time building integrations of diverse systems of record with Blockchain?
  • Do I figure out how to assemble components such as identity management, compute infrastructure, storage, and management & monitoring systems with Blockchain?
  • How do I integrate my familiar development tools & CI/CD platform without learning new tools?
  • And finally, is it the best use of your time to figure out scaling, security, disaster recovery, point-in-time recovery of the distributed ledger, and the “ilities” like reliability, availability, and scalability?

If the answer to one or more of these is a resounding no, you are not alone. Focusing on the above aspects, though important, takes time away from the actual work of meeting your client’s needs in a timely manner, which can definitely be a source of frustration.

But do not despair.

Read on to learn how an enterprise Blockchain platform such as the one from Oracle can make your life simpler. Imagine productivity savings multiplied hundreds of thousands of times across critical enterprise blockchain applications and chaincode.

What is an Enterprise Blockchain Platform?

The very term “enterprise” typically signals a “large-company, expensive thing” in the hearts and minds of developers. Not so in this case: an enterprise platform may be more cost-effective than spending your expensive developer hours to build, manage, and maintain blockchain infrastructure and its dependencies on your own.

As the chart below shows, the top two Blockchain technologies used in proofs of concept have been Ethereum and Hyperledger.


 

Ethereum has been the platform of choice amid the ICO hype for public blockchain use. However, it has relatively lower performance and is slower and less mature than Hyperledger. It also uses a less secure programming model based on a primitive language called Solidity, which is prone to re-entrancy attacks; these led to prominent hacks such as the DAO attack, which lost roughly $50M.
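To make the re-entrancy risk concrete, here is a toy simulation in plain JavaScript (not Solidity, and not any real contract — all names are illustrative): a "bank" that pays out before updating its ledger can be drained by a caller that re-enters withdraw() from the payment callback.

```javascript
// Toy re-entrancy bug: the vulnerable "contract" sends funds BEFORE zeroing
// the caller's balance, so a malicious payment callback can withdraw again.
class VulnerableBank {
  constructor() {
    this.balances = new Map();
    this.vault = 100; // funds belonging to other depositors
  }
  deposit(addr, amt) {
    this.balances.set(addr, (this.balances.get(addr) || 0) + amt);
    this.vault += amt;
  }
  withdraw(addr, onReceive) {
    const bal = this.balances.get(addr) || 0;
    if (bal > 0 && this.vault >= bal) {
      this.vault -= bal;          // funds leave the vault...
      onReceive(bal);             // ...control passes to the caller...
      this.balances.set(addr, 0); // ...and only THEN is the balance zeroed
    }
  }
}

const bank = new VulnerableBank();
bank.deposit("attacker", 10);

let stolen = 0;
function attack(amount) {
  stolen += amount;
  // Re-enter withdraw() before the balance has been reset.
  if (stolen < 30) bank.withdraw("attacker", attack);
}

bank.withdraw("attacker", attack);
console.log(stolen); // 30: three withdrawals from a single 10-unit deposit
```

Fixing the bug is a matter of ordering: zero the balance before invoking the callback, which is exactly the "checks-effects-interactions" discipline Solidity developers must apply by hand.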

Hyperledger Fabric, on the other hand, wins out in terms of maturity, stability, and performance, and is a good choice for enterprise use cases involving permissioned blockchains. In addition, vendors such as Oracle have added capabilities that make it simpler to adopt and use, while retaining open source compatibility.

Let’s look at how an enterprise Blockchain platform, such as the one Oracle has built on open-source Hyperledger Fabric, can help boost developer productivity.

How an Enterprise Blockchain Platform Drives Developer Productivity

Enterprise blockchain platforms provide four key benefits that drive greater developer productivity:

 
Performance at Scale

  • Faster consensus with Hyperledger Fabric
  • Faster world state DB - record level locking for concurrency and parallelization of updates to world state DB
  • Parallel execution across channels, smart contracts
  • Parallelized validation for commit

Operations Console with Web UI

  • Dynamic Configuration – Nodes, Channels
  • Chaincode Lifecycle – Install, Instantiate, Invoke, Upgrade
  • Adding Organizations
  • Monitoring dashboards
  • Ledger browser
  • Log access for troubleshooting

Resilience and Availability

  • Highly Available configuration with replicated VMs
  • Autonomous Monitoring & Recovery
  • Embedded backup of configuration changes and new blocks
  • Zero-downtime patching

Enterprise Development and Integration

  • Offline development support and tooling
  • DevOps CI/CD integration for chaincode deployment, and lifecycle management
  • SQL rich queries, which enable writing fewer lines of code, fewer lines to debug
  • REST API based integration with SaaS, custom apps, systems of record
  • Node.js, GO, Java client SDKs
  • Plug-and-Play integration adapters in Oracle’s Integration Cloud

Developers can experience orders-of-magnitude productivity gains with a pre-assembled, managed, enterprise-grade, and integrated blockchain platform, as compared to assembling one on their own.

Summary

Oracle offers a pre-assembled, open, enterprise-grade blockchain platform, which provides plug-and-play integrations with systems of record and applications, plus autonomous AI-driven self-driving, self-repairing, and self-securing capabilities to streamline operations and blockchain functionality. The platform is built on Oracle’s years of experience serving enterprises’ most stringent use cases and is backed by the expertise of partners trained in Oracle blockchain. The platform rids developers of the hassles of assembling and integrating, and of worrying about performance, resilience, and manageability, which greatly improves productivity.

If you’d like to learn more, register to attend an upcoming webcast (July 16, 9 am PST/12 pm EST). And if you’re ready to dive right in, you can sign up for $300 of free credits, good for up to 3,500 hours of Oracle Autonomous Blockchain Cloud Service usage.

 

5th July 2018 |

Build and Deploy Node.js Microservice on Docker using Oracle Developer Cloud

This is the first blog in a series that will help you understand how to build a Node.js REST microservice, package it as a Docker image, and push it to Docker Hub using Oracle Developer Cloud Service. The next blog in the series will focus on deploying the container we build here to Oracle Kubernetes Engine on Oracle Cloud Infrastructure.

You can read an overview of the Docker functionality in this blog.

Technology Stack Used

Developer Cloud Service - DevOps Platform

Node.js Version 6 – For microservice development.

Docker – For Build

Docker Hub – Container repository

 

Setting up the Environment:

Setting up Docker Hub Account:

You should create an account on https://hub.docker.com/. Keep the credentials handy for use in the build configuration section of the blog.

Setting up Developer Cloud Git Repository:

Now log in to your Oracle Developer Cloud Service project and create a Git repository as shown below. You can give the Git repository a name of your choice; for the purpose of this blog, I am calling it NodeJSDocker. Copy the Git repository URL and keep it handy for future use.

Setting up Build VM in Developer Cloud:

Now we have to create a VM Template and VM with the Docker software bundle for the execution of the build.

Click the user drop-down at the top right of the page and select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

Once the template is created, click the “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created (“DockerTemplate” for this blog).

 

Pushing Scripts to Git Repository on Oracle Developer Cloud:

Command_prompt:> cd <path to the NodeJS folder>

Command_prompt:> git init

Command_prompt:> git add --all

Command_prompt:> git commit -m "<some commit message>"

Command_prompt:> git remote add origin <Developer Cloud Git repository HTTPS URL>

Command_prompt:> git push origin master

The screenshots below are for your reference.

 

Below is the folder structure description for the code that I have in the Git Repository on Oracle Developer Cloud Service.

Code in the Git Repository:

You will need to push the following three files to the Developer Cloud-hosted Git repository we created.

Main.js

This is the main Node.js code, which contains two simple methods: the first shows a message, and the second, /add, adds two numbers. The application listens on port 80.

var express = require("express");
var bodyParser = require("body-parser");
var app = express();
app.use(bodyParser.urlencoded());
app.use(bodyParser.json());
var router = express.Router();
router.get('/', function (req, res) {
  res.json({ "error": false, "message": "Hello Abhinav!" });
});
router.post('/add', function (req, res) {
  res.json({ "error": false, "message": "success", "data": req.body.num1 + req.body.num2 });
});
app.use('/', router);
app.listen(80, function () {
  console.log("Listening at PORT 80");
});
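One subtlety worth noting in the /add handler (illustrated here with a standalone snippet, not part of the service): body-parser's urlencoded middleware delivers every form field as a string, so "+" concatenates; only a JSON body with numeric values produces arithmetic addition.

```javascript
// Why /add should receive a JSON body with numeric values:
// urlencoded form fields always arrive as strings.
const urlencodedBody = { num1: "1", num2: "2" }; // what num1=1&num2=2 parses to
const jsonBody = { num1: 1, num2: 2 };           // what {"num1":1,"num2":2} parses to

console.log(urlencodedBody.num1 + urlencodedBody.num2); // "12" (string concatenation)
console.log(jsonBody.num1 + jsonBody.num2);             // 3   (numeric addition)
```

So when testing the service, send Content-Type: application/json with numeric values to get the sum you expect.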

Package.json

In this JSON snippet we define the Node.js module dependencies, the start file (Main.js for our project), and the name of the application.

{
  "name": "NodeJSMicro",
  "version": "0.0.1",
  "scripts": {
    "start": "node Main.js"
  },
  "dependencies": {
    "body-parser": "^1.13.2",
    "express": "^4.13.1"
  }
}

Dockerfile

This file contains the commands executed to build the Docker container with the Node.js code. It starts from the Node.js version 6 Docker image, then adds the two files, Main.js and package.json, cloned from the Git repository, and runs npm install to download the dependencies defined in the package.json file. It exposes port 80 for the Docker container and finally starts the application, which listens on port 80.

 

FROM node:6
ADD Main.js ./
ADD package.json ./
RUN npm install
EXPOSE 80
CMD [ "npm", "start" ]

Build Configuration:

Click on the “+ New Job” button and, in the dialog that pops up, give the build job a name of your choice (for the purpose of this blog, “NodeJSMicroDockerBuild”) and then select the build template (DockerTemplate) we created earlier from the dropdown.

As part of the build configuration, add Git from the “Add Source Control” dropdown. Now select the repository we created earlier in the blog, NodeJSDocker, and the master branch to which we have pushed the code. You may select the checkbox to configure an automatic build trigger on SCM commits.

Now from the Builders tab, select Docker Builder -> Docker Login. In the Docker Login form you can leave the Registry host empty, as we will be using Docker Hub, the default Docker registry for the Developer Cloud Docker Builder. Provide your Docker Hub account username and password in the respective fields of the login form.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. Again, you can leave the Registry host empty, as we are going to use Docker Hub, the default registry. Now you just need to give the image name in the form that gets added, and you are done with the build job configuration. Click Save to save the build job configuration.

Note: Image name should be in the format <Docker Hub user name>/<Image Name>

For this blog, we can give the image name as nodejsmicro.

Then add Docker Push by selecting Docker Builder -> Docker Push from the Builders tab. Here you just need to provide the image name, the same as in the Docker Build form, to push the Docker image to the Docker registry, which in this case is Docker Hub.

Once you execute the build, you will be able to see the build in the build queue.

Once the build is executed, the Docker image that gets built is pushed to the Docker registry, which is Docker Hub for our blog. You can log in to your Docker Hub account to see the Docker repository that is created and the image pushed to it, as seen in the screenshot below.

Now you can pull this image anywhere, then create and run the container, and you will have your Node.js microservice code up and running.

 

You can go ahead and try many other Docker commands both using the out of the box Docker Builder functionality and also alternatively using the Shell Builder to run your Docker commands.

In the next blog of the series, we will deploy this Node.js microservice container on a Kubernetes cluster in Oracle Kubernetes Engine.

Happy Coding!

*The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

 

 

 

22nd June 2018 |

Arrgs. My Bot Doesn't Understand Me! Why Intent Resolutions Sometimes Appear to Be Misbehaving

Article by Grant Ronald, June 2018

One of the most common questions asked when someone starts building a real bot is “Why am I getting strange intent resolutions?” For example, someone tests the bot with random key presses like “slkejfhlskjefhksljefh” and finds an 80% resolution for “CheckMyBalance”. The first reaction is to blame the intent resolution within the product. However, the reality is that you’ve not trained it to know any better. This short article gives a high-level conceptual explanation of how these models do and don’t work.
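A toy illustration of why gibberish can still score high (this is emphatically not Oracle's actual intent engine; the scoring function and intent names are made up): a classifier typically normalizes its scores across the intents it knows about, so some intent always comes out on top with a confident-looking number.

```javascript
// Toy intent resolver: scores each intent by crude character overlap with
// its sample utterances, then normalizes the scores into "confidences"
// that sum to 1 - so even random input crowns a winner.
function resolve(utterance, intents) {
  const chars = new Set(utterance.toLowerCase());
  const raw = intents.map(({ name, samples }) => {
    const overlap = samples.join(" ").split("")
      .filter((c) => chars.has(c)).length;
    return { name, score: overlap + 1 }; // +1 smoothing: never zero
  });
  const total = raw.reduce((sum, r) => sum + r.score, 0);
  return raw
    .map((r) => ({ name: r.name, confidence: r.score / total }))
    .sort((a, b) => b.confidence - a.confidence);
}

const intents = [
  { name: "CheckMyBalance", samples: ["what is my balance"] },
  { name: "TransferMoney", samples: ["send money to an account"] },
];

const result = resolve("slkejfhlskjefhksljefh", intents);
// With only two intents, the top confidence is always at least 0.5,
// no matter how meaningless the input is.
console.log(result[0].name, result[0].confidence.toFixed(2));
```

The cure is the same as the article suggests: train the model with enough varied utterances (including out-of-scope ones) that it learns what each intent does not look like.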

READ THE FULL ARTICLE

Related Content

TechExchange - First Step in Training Your Bot

 

22nd June 2018 |

A Practical Guide to Building Multi-Language Chatbots with the Oracle Bot Platform

Article by Frank Nimphius, Marcelo Jabali - June 2018

Chatbot support for multiple languages is a worldwide requirement. Almost every country has the need for supporting foreign languages, be it to support immigrants, refugees, tourists, or even employees crossing borders on a daily basis for their jobs.

According to the Linguistic Society of America, as of 2009, 6,909 distinct languages had been classified, a number that has since grown. Although no bot needs to support all languages, for developers building multi-language bots, understanding natural language in multiple languages is a challenge, especially if the developer does not speak all of the languages he or she needs to support.

This article explores Oracle's approach to multi-language support in chatbots. It explains the tooling and practices to use and follow to build bots that understand and "speak" foreign languages.

Read the full article.

 

Related Content

TechExchange: A Simple Guide and Solution to Using Resource Bundles in Custom Components 

TechExchange - Custom Component Development in OMCe – Getting Up and Running Immediately

TechExchange - First Step in Training Your Bot

 

20th June 2018 |

API Monetization: What Developers Need to Know

You’ve no doubt heard the term “API monetization,” but do you really understand what it means? More importantly, do you understand what API monetization means for developers?

“The general availability of information and services has really influenced the way APIs behave and the way APIs are built,” says Oracle ACE and Developer Champion Arturo Viveros, principal architect at Sysco AS in Norway. “The hyper-distributed nature of the systems we work with, with cloud computing and with blockchain, and all of these technologies, makes it very important. Everyone wants to have information in real time now, as opposed to before when we could afford to create APIs that could give you a snapshot of what happened a few hours ago, or a day ago.”

These days the baseline consumer expectation is 24/7/365 service. “So, as a developer, when you’re designing APIs that are going to be exposed as business assets or as products, you need to take into account characteristics like high availability, performance resiliency, and flexibility,” says Viveros. “That’s why all of these new technologies go into supporting APIs, like microservices and containers and serverless. It's so critical to learn to use them because they allow you to be flexible to deploy new versions or improved versions of APIs. They allow your APIs to have an improved life cycle and to move away from the whole monolithic paradigm, reduce time to market, and move forward at the speed that the organization and your user base and consumer base require.”

So yeah, there’s a bit of a learning curve. But hasn’t that always been the developer’s reality? And hasn’t there always been some kind of reward at the end of the learning curve?

“It’s an exciting time for developers,” says Luis Weir. He’s an Oracle ACE Director, a Developer Champion, and the CTO of the Oracle Delivery Unit with Capgemini in the UK. “API monetization is an opportunity to add direct tangible value to the business. APIs have become a source of revenue on their own,” says Weir. “This is quite exciting. I don't think this is something that we’ve seen before in the IT industry. Whatever APIs we had in the past were in support of a business product, they were not the business product. That's different, and I think developers have the opportunity now to be completely, directly involved in the creation and maintenance of these products.”

While developing APIs is certainly important, it’s no less important to take advantage of what is already out there. “Developers within an organization need to be thinking about what APIs might be available to complete functions that are not within their core competency,” says Robert Wunderlich, product strategy director for Cloud, API, and Integration at Oracle. “There are a lot of publicly available APIs that can be used for low or no cost or a reasonable cost.”

[For example, check out the API Showcase on the NYC Developer Portal ]

Luis Weir sees another important aspect of API monetization. “As a developer it's always exciting to see how your product is received. For example, when you create an open source GitHub project and then all of a sudden you see a lot of people forking your project and trying to trace pull requests to contribute to it, that's exciting because that means that you did something that added to your organization or to the community. That's rewarding as a developer. It’s far more rewarding to see an IT asset that's directly influencing the direction of the business.” API monetization provides that visibility.

Arturo Viveros, Luis Weir, and Robert Wunderlich explore API monetization in depth from a developer perspective in this month’s Oracle Developer Community Podcast. Check it out!

The Panelists

In alphabetical order

Arturo Viveros
Oracle ACE
Oracle Developer Champion
Principal Architect, Sysco AS

Luis Weir
Oracle ACE Director
Oracle Developer Champion
CTO, Oracle Delivery Unit, Capgemini UK

Robert Wunderlich
Product Strategy Director for Cloud, API, and Integration, Oracle

Additional Resources

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

 

19th June 2018 |

APIs to the Rescue in the Aftermath of 2017 Mexican Earthquake

After three weeks, Hawaii's Kilauea volcano is still busy eating an island. Early in June, Guatemala's Volcán de Fuego erupted and is still literally shaking the earth. And just this past weekend, a 5.3 magnitude quake struck Osaka, Japan. Mother Earth knows how to get our attention. But in doing so she also triggers an impulse in some human beings to jump in and help in any way they can.

One great example of that kind of techie humanitarianism is the group of Mexican developers and DBAs who, in the immediate aftermath of the earthquake that hit Mexico in 2017, banded together in a collaborative effort to rapidly build a system to coordinate rescue and relief efforts.

Oracle ACE Rene Antunez was one of the volunteers in that effort. He shares the organizational and technical details in this video interview recorded at last week's ODTUG Kscope 2018 event in Orlando.

Given that natural disasters are likely to continue to happen, the open source project is ongoing and is available on GitHub:

https://github.com/CodeandoMexico/terremoto-cdmx

Why not lend your skills to this worthwhile effort?

Have you been involved in similar humanitarian software development efforts? Post a comment below.

 

 

25th May 2018 |

Announcing Oracle APEX 18.1

Oracle Application Express (APEX) 18.1 is now generally available! APEX enables you to develop, design and deploy beautiful, responsive, data-driven desktop and mobile applications using only a browser. This release of APEX is a dramatic leap forward in both the ease of integration with remote data sources, and the easy inclusion of robust, high-quality application features.

Keeping up with the rapidly changing industry, APEX now makes it easier than ever to build attractive and scalable applications which integrate data from anywhere - within your Oracle database, from a remote Oracle database, or from any REST Service, all with no coding.  And the new APEX 18.1 enables you to quickly add higher-level features which are common to many applications - delivering a rich and powerful end-user experience without writing a line of code.

"Over a half million developers are building Oracle Database applications today using  Oracle Application Express (APEX).  Oracle APEX is a low code, high productivity app dev tool which combines rich declarative UI components with SQL data access.  With the new 18.1 release, Oracle APEX can now integrate data from REST services with data from SQL queries.  This new functionality is eagerly awaited by the APEX developer community", said Andy Mendelsohn, Executive Vice President of Database Server Technologies at Oracle Corporation.

 

Some of the major improvements to Oracle Application Express 18.1 include:

Application Features


It has always been easy to add components to an APEX application: a chart, a form, a report. But in APEX 18.1, you can now add higher-level application features to your app, including access control, feedback, activity reporting, email reporting, dynamic user interface selection, and more. In addition to the existing reporting and data visualization components, you can now create an application with a "cards" report interface, a dashboard, and a timeline report. The result? A powerful, rich, and easily created application, all without writing a single line of code.

REST Enabled SQL Support


Oracle REST Data Services (ORDS) REST-Enabled SQL Services enables the execution of SQL in remote Oracle databases, over HTTP and REST. You can POST SQL statements to the service, and the service then runs the SQL statements against the Oracle database and returns the result to the client in JSON format.

In APEX 18.1, you can build charts, reports, calendars, trees and even invoke processes against Oracle REST Data Services (ORDS)-provided REST Enabled SQL Services.  No longer is a database link necessary to include data from remote database objects in your APEX application - it can all be done seamlessly via REST Enabled SQL.
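As a sketch of what this exchange looks like on the wire (the endpoint path, column names, and response shape below are illustrative assumptions; consult the ORDS documentation for the exact contract), the client POSTs a SQL statement to an ORDS endpoint and receives the rows back as JSON:

```javascript
// Hypothetical REST-Enabled SQL exchange. A client would POST the statement
// to an ORDS endpoint such as https://host/ords/hr/_/sql; here we only model
// the request payload and parse a sample JSON response of the assumed shape.
const requestPayload = JSON.stringify({
  statementText: "SELECT employee_id, last_name FROM employees WHERE rownum <= 2"
});

// Sample response in the assumed shape: one entry per statement executed,
// each carrying a resultSet whose items are the rows.
const sampleResponse = JSON.stringify({
  items: [{
    statementId: 1,
    resultSet: {
      items: [
        { employee_id: 100, last_name: "King" },
        { employee_id: 101, last_name: "Kochhar" }
      ]
    }
  }]
});

const rows = JSON.parse(sampleResponse).items[0].resultSet.items;
console.log(rows.map((r) => r.last_name).join(", ")); // King, Kochhar
```

APEX hides this plumbing: when a chart or report is bound to a REST-Enabled SQL source, the POST and the JSON parsing happen declaratively.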

Web Source Modules


APEX now offers the ability to declaratively access data services from a variety of REST endpoints, including ordinary REST data feeds, REST Services from Oracle REST Data Services, and Oracle Cloud Applications REST Services.  In addition to supporting smart caching rules for remote REST data, APEX also offers the unique ability to directly manipulate the results of REST data sources using industry standard SQL.

REST Workshop


APEX includes a completely rearchitected REST Workshop to assist in the creation of REST Services against your Oracle database objects. The REST definitions are managed in a single repository, and the same definitions can be edited via the APEX REST Workshop, SQL Developer, or documented APIs. Users can exploit the data management skills they already possess, such as writing SQL and PL/SQL, to define RESTful API services for their database. The new REST Workshop also includes the ability to generate Swagger documentation for your REST definitions, all with the click of a button.

Application Builder Improvements


In Oracle Application Express 18.1, wizards have been streamlined with smarter defaults and fewer steps, enabling developers to create components quicker than ever before.  There have also been a number of usability enhancements to Page Designer, including greater use of color and graphics on page elements, and "Sticky Filter" which is used to maintain a specific filter in the property editor.  These features are designed to enhance the overall developer experience and improve development productivity.  APEX Spotlight Search provides quick navigation and a unified search experience across the entire APEX interface.

Social Authentication


APEX 18.1 introduces a new native authentication scheme, Social Sign-In.  Developers can now easily create APEX applications which can use Oracle Identity Cloud Service, Google, Facebook, generic OpenID Connect and generic OAuth2 as the authentication method, all with no coding.

Charts


The data visualization engine of Oracle Application Express is powered by Oracle JET (JavaScript Extension Toolkit), a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles.  The charts in APEX are fully HTML5 capable and work in any modern browser, regardless of platform or screen size.  These charts provide numerous ways to visualize a data set, including bar, line, area, range, combination, scatter, bubble, polar, radar, pie, funnel, and stock charts.  APEX 18.1 features an upgraded Oracle JET 4.2 engine with updated charts and APIs.  There are also new chart types, including Gantt, Box Plot and Pyramid, and better support for multi-series, sparse data sets.

Mobile UI


APEX 18.1 introduces many new UI components to assist in the creation of mobile applications.  Three new component types, ListView, Column Toggle and Reflow Report, can now be used natively with the Universal Theme and are commonly used in mobile applications.  Additional mobile-focused enhancements have been made to the APEX Universal Theme, namely mobile page headers and footers, which remain consistently displayed on mobile devices, and floating item label templates, which optimize the information presented on a mobile screen.  Lastly, APEX 18.1 also includes declarative support for touch-based dynamic actions (tap and double tap, press, swipe, and pan), supporting the creation of rich and functional mobile applications.

Font APEX


Font APEX is a collection of over 1,000 high-quality icons, many created specifically for use in business applications.  Font APEX in APEX 18.1 includes a new set of high-resolution 32 x 32 icons with much greater detail, and the correctly sized font is selected automatically based on where it is used in your APEX application.

Accessibility


APEX 18.1 includes a collection of tests in the APEX Advisor which can be used to identify common accessibility issues in an APEX application, including missing headers and titles, and more. This release also deprecates the accessibility modes, as a separate mode is no longer necessary to be accessible.

Upgrading


If you're an existing Oracle APEX customer, upgrading to APEX 18.1 is as simple as installing the latest version.  The APEX engine will automatically be upgraded and your existing applications will look and run exactly as they did in the earlier versions of APEX.  

 

"We believe that APEX-based PaaS solutions provide a complete platform for extending Oracle’s ERP Cloud. APEX 18.1 introduces two new features that make it a landmark release for our customers. REST Service Consumption gives us the ability to build APEX reports from REST services as if the data were in the local database. This makes embedding data from a REST service directly into an ERP Cloud page much simpler. REST enabled SQL allows us to incorporate data from any Cloud or on-premise Oracle database into our Applications. We can’t wait to introduce APEX 18.1 to our customers!", said Jon Dixon, co-founder of JMJ Cloud.

 

Additional Information


Application Express (APEX) is the low-code rapid application development platform which can run in any Oracle Database and is included with every Oracle Database Cloud Service.  APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain and monitor data-driven business applications that look great on mobile and desktop devices.  To learn more about Oracle Application Express, visit apex.oracle.com.  To learn more about Oracle Database Cloud, visit cloud.oracle.com/database.

 

24th May 2018 |

Oracle Cloud Infrastructure CLI on Developer Cloud

With our May 2018 release of Oracle Developer Cloud, we have integrated the Oracle Cloud Infrastructure command line interface (OCIcli, from here on) into the build pipeline in Developer Cloud. This blog will help you understand how to configure and execute OCIcli commands as part of the build pipeline, configured as part of a build job in Developer Cloud.

Configuring the Build VM Template for OCIcli

You will have to create a build VM with the OCIcli software bundle, to be able to execute the build with OCIcli commands. Click on the user drop down on the right hand top of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select OCIcli from the list of software bundles available for configuration and click on the + sign to add it to the template. You will also have to add the Python3.5 software bundle, which is a dependency for the OCIcli. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “OCIcli” for this blog.

Build Job Configuration

Configure the Tenancy OCID as a build parameter, using a String Parameter, and name it as you wish. I have named it "T" and provided a default value, as shown in the screenshot below.

In the Builders tab, select the OCIcli Builder and a Unix Shell builder, in that sequence, from the Add Builder dropdown.

On adding the OCIcli Builder, you will see the form as below.

For the OCIcli Builder, you can get the parameter values from the OCI console. The screenshots below show where to find each of these form values in the OCI console. The red boxes highlight where you can get the Tenancy OCID and the region for the “Tenancy” and “Region” fields, respectively, in the OCIcli builder form.

For the “User OCID” and “Fingerprint”, you need to go to User Settings by clicking the username dropdown at the top right of the OCI console. Please refer to the screenshot below.

Please refer to the links below to understand the process of generating the private key and configuring the public key for the user in the OCI console.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

In the Unix Shell Builder you can try the command below:

oci iam compartment list -c $T

This command lists all the compartments in the tenancy whose OCID is given by the variable ‘T’ that we configured in the Build Parameters tab as a String Parameter.
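OCIcli commands return JSON, which makes it easy to post-process their output in a later build step. Below is a minimal Python sketch of such post-processing; the payload is an illustrative sample shaped like the compartment-list response, with placeholder OCIDs, not real output.

```python
import json

# Sample payload shaped like "oci iam compartment list" JSON output
# (the real response carries more fields); values are placeholders.
sample = """
{
  "data": [
    {"id": "ocid1.compartment.oc1..aaaa", "name": "packerTest", "lifecycle-state": "ACTIVE"},
    {"id": "ocid1.compartment.oc1..bbbb", "name": "demo", "lifecycle-state": "DELETED"}
  ]
}
"""

def active_compartments(payload):
    """Return the names of compartments still in the ACTIVE state."""
    doc = json.loads(payload)
    return [c["name"] for c in doc["data"] if c.get("lifecycle-state") == "ACTIVE"]

print(active_compartments(sample))
```

In a real build step you would pipe the command's stdout into a script like this instead of embedding a sample string.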

 

After the command executes, you can view the output in the console log, as shown below.

There are many other OCIcli commands that you can run as part of the build pipeline. Please refer to this link for the full list.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

 

23rd May 2018 |

Oracle Developer Cloud - New Continuous Integration Engine Deep Dive

We introduced our new Build Engine in Oracle Developer Cloud in our April release. This new build engine now comes with the capability to define build pipelines visually. Read more about it in my previous blog.

In this blog we will delve deeper into some of the functionality of the Build Pipeline feature of the new CI engine in Oracle Developer Cloud.

Auto Start

Auto Start is an option offered when creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog for creating a new pipeline, with a checkbox that must be checked to enable auto start: when one of the build jobs in the pipeline is executed externally, the rest of the build jobs in the pipeline are triggered automatically.

The screenshot below shows a pipeline for a Node.js application created in Oracle Developer Cloud Pipelines. The build jobs used in the pipeline are build-microservice, test-microservice and loadtest-microservice. In parallel to the microservice build sequence we have WiremockInstall and WiremockConfigure.

Scenarios When Auto Start is enabled for the Pipeline:

Scenario 1:

If we run the build-microservice build job externally, it will trigger the execution of the test-microservice and loadtest-microservice build jobs, in that order. Note that this does not trigger the WiremockInstall or WiremockConfigure build jobs, as they are part of a separate sequence. Please refer to the screenshot below, which shows the executed build jobs in green.

Scenario 2:

If we run the test-microservice build job externally, it will trigger the execution of the loadtest-microservice build job only. Please refer to the screenshot below, which shows the executed build jobs in green.

Scenario 3:

If we run the loadtest-microservice build job externally, no other build job in the pipeline is executed, in either build sequence.
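The three scenarios boil down to one rule: an external run triggers only the jobs downstream of the started job in its own sequence. The toy Python model below illustrates that rule; it is a sketch of the behavior described above, not the Developer Cloud API.

```python
# Toy model of Auto Start: running a job externally triggers only the
# jobs *after* it in its own sequence. Illustration only, not the
# Developer Cloud API.
MICROSERVICE_SEQ = ["build-microservice", "test-microservice", "loadtest-microservice"]
WIREMOCK_SEQ = ["WiremockInstall", "WiremockConfigure"]

def auto_started_jobs(sequence, started_job):
    """Jobs triggered downstream of an externally started job."""
    if started_job not in sequence:
        return []
    return sequence[sequence.index(started_job) + 1:]

print(auto_started_jobs(MICROSERVICE_SEQ, "build-microservice"))    # Scenario 1
print(auto_started_jobs(MICROSERVICE_SEQ, "loadtest-microservice"))  # Scenario 3
```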

Exclusive Build

This option prevents the pipeline's jobs from being built externally while the pipeline itself is executing. It is offered when creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog for creating a new pipeline, with a checkbox that must be checked to ensure that the pipeline's build jobs cannot be built in parallel with the pipeline execution.

When you run the pipeline, you will see the build jobs queued for execution in the Build History. In this case you will see two build jobs queued: one is build-microservice and the other is WiremockInstall, as they head the two parallel sequences of the same pipeline.

Now if you try to run any of the build jobs in the pipeline externally, for example test-microservice, you will get an error message, as shown in the screenshot below.

 

Pipeline Instances:

If you click the build pipeline name link in the Pipelines tab, you will see the pipeline instances. A pipeline instance is a single execution of the pipeline.

The screenshot below shows the pipeline instances with the timestamp of each execution. Hovering over the status icon of a pipeline instance shows whether the pipeline was auto started by an external execution of a build job, or succeeded because all of its build jobs were built successfully. The build jobs that executed successfully for a particular pipeline instance are shown in green; build jobs that did not execute have a white background. You also get the option to cancel a pipeline while it is executing, and you may choose to delete an instance after the pipeline has run.

 

Conditional Build:

The visual build pipeline editor in Oracle Developer Cloud supports conditional builds. Double-click the link connecting two build jobs and select one of the conditions below:

Successful: To proceed to the next build job in the sequence if the previous one was a success.

Failed: To proceed to the next build job in the sequence if the previous one failed.

Test Failed: To proceed to the next build job in the sequence if the test failed in the previous build job in the pipeline.
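Conceptually, each link carries one condition that is checked against the upstream job's result. The sketch below models that check in Python; the condition and result names are assumptions for illustration, not Developer Cloud identifiers.

```python
# Illustrative model (not the Developer Cloud API): does a pipeline link
# fire, given the condition set on the link and the upstream job's result?
RULES = {
    "Successful": {"SUCCESS"},
    "Failed": {"FAILURE"},
    "Test Failed": {"TEST_FAILURE"},
}

def link_fires(condition, upstream_result):
    """condition: one of the three link conditions listed above."""
    return upstream_result in RULES[condition]

print(link_fires("Successful", "SUCCESS"))  # True
```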

 

Fork and Join:

Scenario 1: Fork

In this scenario, suppose three build jobs depend on build-microservice: “DockerBuild”, which builds a deployable Docker image for the code; “terraformBuild”, which provisions an instance on Oracle Cloud Infrastructure and deploys the code artifact; and “ArtifactoryUpload”, which uploads the generated artifact to Artifactory. You can then fork the build jobs as shown below.

 

Scenario 2: Join

If you have a build job, test-microservice, that depends on two other build jobs (build-microservice, which builds and deploys the application, and WiremockConfigure, which configures the service stub), then you need to create a join in the pipeline, as shown in the screenshot below.

 

You can refer to the Build Pipeline documentation here.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

 

16th May 2018 |

Pizza, Beer, and Dev Expertise at Your Local Meet-up

Big developer conferences are great places to learn about new trends and technologies, attend technical sessions, and connect with colleagues. But by virtue of their size, their typical location in destination cities, and multi-day schedules, they can require a lot of planning, expense, and time away from work.

Meet-ups offer a fantastic alternative. They’re easily accessible local events, generally lasting a couple of hours. Meet-ups operate on a more human scale and are far less crowded than big conferences, with a far more casual, informal atmosphere that can be much more conducive to learning through Q&A and hands-on activities.

One big meet-up advantage is that, by virtue of their smaller scale, they can be scheduled more frequently. For example, while Oracle ACE Associate Jon-Petter Hjulstad and his colleagues attend the annual Oracle User Group Norway (OUGN) Conference, they wanted to get together more often, three or four times a year. The result is a series of OUGN Integration meet-ups “where we can meet people who work on the same things.” As of this podcast, two meet-ups have already taken place, with a third scheduled for the end of May.

Luis Weir, CTO at Capgemini in the UK and an Oracle ACE Director and Developer Champion, felt a similar motivation. “There's so many events going on and there's so many places where developers can go,” Luis says. But sometimes developers want a more relaxed, informal, more approachable atmosphere in which to exchange knowledge. Working with his colleague Phil Wilkins, senior consultant at Capgemini and an Oracle ACE, Luis set out to organize a series of meet-ups that offered more “cool.”

Phil’s goal in the effort was to organize smaller events that were “a little less formal, and a bit more convenient.” Bigger, longer events are more difficult to attend because they require more planning on the part of attendees. “It can take quite a bit of effort to organize your day if you’re going to be out for a whole day to attend a user group special interest group event,” Phil says. But local events scheduled in the evening require much less planning in order to attend. “It's great! You can get out and attend these things and you get to talk to people just as much as you would at a day-time event.”

For Oracle ACE Ruben Rodriguez Santiago, a Java, ADF, and cloud solution specialist with Avanttic in Spain, the need for meet-ups arose out of a dearth of events focused on Oracle technologies, and those that were available were limited to database and SaaS. “So for me this was a way to get moving and create events for developers,” Ruben says.

What steps did these meet-up organizers take? What insight have they gained along the way as they continue to organize and schedule meet-up events? You’ll learn all that and more in this podcast. Listen!

 

The Panelists

Jon-Petter Hjulstad
Department Manager, SYSCO AS

Ruben Rodriguez Santiago
Java, ADF, and Cloud Solution Specialist, Avanttic

Luis Weir
CTO, Oracle DU, Capgemini

Phil Wilkins
Senior Consultant, Capgemini

Additional Resources

Coming Soon
  • What Developers Need to Know About API Monetization
  • Best Practices for API Development
Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

 

 

9th May 2018 |

Build Oracle Cloud Infrastructure custom Images with Packer on Oracle Developer Cloud

In the April release of Oracle Developer Cloud Service we started supporting Docker and HashiCorp Terraform builds as part of the CI & CD pipeline.  HashiCorp Terraform helps you provision Oracle Cloud Infrastructure instances as part of the build pipeline. But what if you want to provision an instance using a custom image instead of the base image? You need a tool like HashiCorp Packer to script the building of images. With Docker build support, we can now build Packer-based images as part of the build pipeline in Oracle Developer Cloud. This blog will help you understand how to use Docker and Packer together on Developer Cloud to create custom images on Oracle Cloud Infrastructure.

About HashiCorp Packer

HashiCorp Packer automates the creation of any type of machine image. It embraces modern configuration management by encouraging the use of automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities.

You can read more about HashiCorp Packer on https://www.packer.io/

You can find the details of HashiCorp Packer support for Oracle Cloud Infrastructure here.

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform used to build your CI & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would build the image which can be used for provisioning.

Packer: A tool for creating custom images on the cloud. We will be using it with Oracle Cloud Infrastructure, or OCI as it is popularly known. For the rest of this blog, I will mostly refer to it as OCI.

Packer Scripts

To execute the Packer scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload three files to the Git repository. To do so, first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Note: Ensure that the Git repository is created and you have the HTTPS URL for it.

Below is the folder structure description for the scripts that I have in the Git Repository on Oracle Developer Cloud Service.

Description of the files:

oci_api_key.pem – This file is required for OCI access. It contains the private key used for OCI API signing.

Note: Please refer to the links below for details on the OCI API key. You will also need the corresponding public key to be configured for your user in the OCI console.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3

 

build.json: This is the only configuration file you need for Packer. This JSON file contains all the definitions Packer needs to create an image on Oracle Cloud Infrastructure. I have truncated the OCIDs and fingerprint for security reasons.

 

{
  "builders": [
    {
      "user_ocid": "ocid1.user.oc1..aaaaaaaa",
      "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaay",
      "fingerprint": "29:b1:8b:e4:7a:92:ae",
      "key_file": "oci_api_key.pem",
      "availability_domain": "PILZ:PHX-AD-1",
      "region": "us-phoenix-1",
      "base_image_ocid": "ocid1.image.oc1.phx.aaaaaaaal",
      "compartment_ocid": "ocid1.compartment.oc1..aaaaaaaahd",
      "image_name": "RedisOCI",
      "shape": "VM.Standard1.1",
      "ssh_username": "ubuntu",
      "ssh_password": "welcome1",
      "subnet_ocid": "ocid1.subnet.oc1.phx.aaaaaaaa",
      "type": "oracle-oci"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 30",
        "sudo apt-get update",
        "sudo apt-get install -y redis-server"
      ]
    }
  ]
}

You can give a value of your choice for image_name, and it is recommended (though optional) to provide ssh_password. I have kept ssh_username as “ubuntu”, as my base image OS was Ubuntu. Leave the type and shape as is. The base_image_ocid depends on the region: different regions have different OCIDs for the base images. Please refer to the link below to find the OCID for the image in your region.

https://docs.us-phoenix-1.oraclecloud.com/images/
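Since each region has its own base image OCID, it can help to keep a small lookup table and generate the base_image_ocid field from it, so a phoenix OCID never ends up in an ashburn build. Below is a Python sketch of that idea; the OCID values are truncated placeholders, not real image OCIDs, and the lookup itself is a convenience I'm assuming, not part of Packer.

```python
# Hypothetical region -> base-image OCID lookup; the OCID values are
# truncated placeholders -- take the real ones from the per-region image list.
BASE_IMAGE_BY_REGION = {
    "us-phoenix-1": "ocid1.image.oc1.phx.aaaaaaaal",
    "us-ashburn-1": "ocid1.image.oc1.iad.aaaaaaaax",
}

def base_image_ocid(region):
    """Return the base image OCID recorded for a region, or raise."""
    try:
        return BASE_IMAGE_BY_REGION[region]
    except KeyError:
        raise ValueError(f"No base image recorded for region {region!r}")

print(base_image_ocid("us-phoenix-1"))
```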

Now login into your OCI console to retrieve some of the details needed for the build.json definitions.

Below screenshot shows where you can retrieve your tenancy_ocid from.

Below screenshot of OCI console shows where you will find the compartment_ocid.

Below screenshot of OCI console shows where you will find the user_ocid.

You can retrieve the region and availability_domain as shown below.

Now select the compartment, which is “packerTest” for this blog, then click on the Networking tab and then the VCN you have created. Here you will see one subnet per availability domain. Copy the OCID of the subnet for the availability_domain you have chosen.

Dockerfile: This installs Packer in Docker and runs the Packer command to create a custom image on OCI. It pulls the packer:full image, adds the build.json and oci_api_key.pem files to the Docker image, and then executes the packer build command.

 

FROM hashicorp/packer:full
ADD build.json ./
ADD oci_api_key.pem ./
RUN packer build build.json

 

Configuring the Build VM

With our latest release, you will have to create a build VM with the Docker software bundle to be able to execute the Packer build, as we are using Docker to install and run Packer.

Click on the user drop down on the right hand top of the page. Select “Organization” from the menu.

Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button.

On creation of the template click on “Configure Software” button.

Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration.

Click on the Virtual Machines tab, then click on the “+New VM” button, enter the number of VMs you want to create, and select the VM Template you just created, which would be “DockerTemplate” for this blog.

 

Build Job Configuration

Click on the “+ New Job” button and, in the dialog that pops up, give the build job a name of your choice and select from the dropdown the build template (DockerTemplate) that we created earlier in the blog.

As part of the build configuration, add Git from the “Add Source Control” dropdown, then select the repository and branch to build from. You may select the checkbox to trigger the build automatically on SCM commits.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You just need to give the image name in the form that gets added, and you are done with the build job configuration. Now click on Save to save the build job configuration.

On execution of the build job, the image gets created in the OCI instance in the defined compartment, as shown in the screenshot below.

So now you can easily automate custom image creation on Oracle Cloud Infrastructure using Packer as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Packing!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

 

9th May 2018 |

Infrastructure as Code using Terraform on Oracle Developer Cloud

With our April release, we have started supporting HashiCorp Terraform builds in Oracle Developer Cloud. This blog will help you understand how to use HashiCorp Terraform in a build pipeline to provision Oracle Cloud Infrastructure as part of the build pipeline automation.

 

Tools and Platforms Used

Below are the tools and cloud platforms I use for this blog:

Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline.

Oracle Cloud Infrastructure: IaaS platform where we would provision the infrastructure for our usage.

Terraform: A tool for provisioning infrastructure on the cloud. We will be using it with Oracle Cloud Infrastructure, or OCI as it is popularly known. For the rest of this blog, I will refer to it as OCI.

 

About HashiCorp Terraform

HashiCorp Terraform is a tool that helps you write, plan and create your infrastructure safely and efficiently. It can manage existing and popular service providers, such as Oracle, as well as custom in-house solutions. Configuration files describe to HashiCorp Terraform the components needed to run a single application or your entire datacenter. It helps you build, manage and version your infrastructure as code. To learn more about HashiCorp Terraform, go to: https://www.terraform.io/

 

Terraform Scripts

To execute the Terraform scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload all the scripts to the Git repository. To do so, first install the Git CLI on your machine and then use the commands below to upload the code:

I was using a Windows machine for script development, so below is what you need to do on the command line:

Pushing Scripts to Git Repository on Oracle Developer Cloud

Command_prompt:> cd <path to the Terraform script folder>

Command_prompt:>git init

Command_prompt:>git add --all

Command_prompt:>git commit -m "<some commit message>"

Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL>

Command_prompt:>git push origin master

Below is the folder structure description for the terraform scripts that I have in the Git Repository on Oracle Developer Cloud Service.

The terraform scripts are inside the exampleTerraform folder and the oci_api_key_public.pem and oci_api_key.pem are the OCI keys.

In the exampleTerraform folder we have all the “tf” extension files along with the env-vars file. You will be able to see the definition of the files later in the blog.

In the “userdata” folder is the bootstrap shell script, which is executed when the VM first boots up on OCI.

Below is the description of each file in the folder and the snippet:

env-vars: This is the most important file; it sets all the environment variables that the Terraform scripts use for accessing and provisioning the OCI instance.

### Authentication details
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaa"
export TF_VAR_fingerprint="29:b1:8b:e4:7a:92:ae:d5"
export TF_VAR_private_key_path="/home/builder/.terraform.d/oci_api_key.pem"

### Region
export TF_VAR_region="us-phoenix-1"

### Compartment ocid
export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaa"

### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat exampleTerraform/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat exampleTerraform/id_rsa)

Note: all the ocids above are truncated for security and brevity.
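If a TF_VAR_* variable is missing from the environment, Terraform may prompt for input or fail mid-run, so a pre-flight check in the build can fail fast when env-vars was not sourced. The Python helper below is an illustrative convenience of my own, not part of Terraform.

```python
import os

# Pre-flight check (illustrative, not part of Terraform): report any
# TF_VAR_* variables from env-vars that are missing from the environment.
REQUIRED = [
    "TF_VAR_tenancy_ocid", "TF_VAR_user_ocid", "TF_VAR_fingerprint",
    "TF_VAR_private_key_path", "TF_VAR_region", "TF_VAR_compartment_ocid",
]

def missing_tf_vars(environ=None):
    env = os.environ if environ is None else environ
    return [name for name in REQUIRED if not env.get(name)]

# Checked against a sample environment here; in the build you would call
# missing_tf_vars() with no argument to inspect the real environment.
missing = missing_tf_vars({"TF_VAR_region": "us-phoenix-1"})
print(missing)
```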

The OCI console screenshots below show where to locate these OCIDs:

tenancy_ocid and region

compartment_ocid:

user_ocid:

Point the paths at the RSA key files used for the SSH connection and at the OCI API private key pem file, all of which are in the Git repository.

variables.tf: In this file we initialize the Terraform variables and configure the instance image OCID. This is the OCID of a base image available out of the box on OCI; it varies with the region where your OCI instance is provisioned. Use this link to learn more about the OCI base images. Here we also configure the path of the bootstrap file, which resides in the userdata folder and is executed when the OCI machine boots.

variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}
variable "compartment_ocid" {}
variable "ssh_public_key" {}
variable "ssh_private_key" {}

# Choose an Availability Domain
variable "AD" {
  default = "1"
}

variable "InstanceShape" {
  default = "VM.Standard1.2"
}

variable "InstanceImageOCID" {
  type = "map"
  default = {
    // Oracle-provided image "Oracle-Linux-7.4-2017.12.18-0"
    // See https://docs.us-phoenix-1.oraclecloud.com/Content/Resources/Assets/OracleProvidedImageOCIDs.pdf
    us-phoenix-1 = "ocid1.image.oc1.phx.aaaaaaaa3av7orpsxid6zdpdbreagknmalnt4jge4ixi25cwxx324v6bxt5q"
    //us-ashburn-1 = "ocid1.image.oc1.iad.aaaaaaaaxrqeombwty6jyqgk3fraczdd63bv66xgfsqka4ktr7c57awr3p5a"
    //eu-frankfurt-1 = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaayxmzu6n5hsntq4wlffpb4h6qh6z3uskpbm5v3v4egqlqvwicfbyq"
  }
}

variable "DBSize" {
  default = "50" // size in GBs
}

variable "BootStrapFile" {
  default = "./userdata/bootstrap"
}

compute.tf: The display name, compartment OCID, image to be used, shape, and network parameters are configured here, as shown in the code snippet below.

 

resource "oci_core_instance" "TFInstance" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFInstance"
  image = "${var.InstanceImageOCID[var.region]}"
  shape = "${var.InstanceShape}"

  create_vnic_details {
    subnet_id = "${oci_core_subnet.ExampleSubnet.id}"
    display_name = "primaryvnic"
    assign_public_ip = true
    hostname_label = "tfexampleinstance"
  }

  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }

  timeouts {
    create = "60m"
  }
}

network.tf: Here we have the Terraform script for creating the VCN, subnet, internet gateway and route table. These are vital for creating and accessing the compute instance we provision.

resource "oci_core_virtual_network" "ExampleVCN" {
  cidr_block = "10.1.0.0/16"
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFExampleVCN"
  dns_label = "tfexamplevcn"
}

resource "oci_core_subnet" "ExampleSubnet" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  cidr_block = "10.1.20.0/24"
  display_name = "TFExampleSubnet"
  dns_label = "tfexamplesubnet"
  security_list_ids = ["${oci_core_virtual_network.ExampleVCN.default_security_list_id}"]
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.ExampleVCN.id}"
  route_table_id = "${oci_core_route_table.ExampleRT.id}"
  dhcp_options_id = "${oci_core_virtual_network.ExampleVCN.default_dhcp_options_id}"
}

resource "oci_core_internet_gateway" "ExampleIG" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFExampleIG"
  vcn_id = "${oci_core_virtual_network.ExampleVCN.id}"
}

resource "oci_core_route_table" "ExampleRT" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.ExampleVCN.id}"
  display_name = "TFExampleRouteTable"
  route_rules {
    cidr_block = "0.0.0.0/0"
    network_entity_id = "${oci_core_internet_gateway.ExampleIG.id}"
  }
}

block.tf: The script below defines the block volume for the compute instance being provisioned and its iSCSI attachment.

resource "oci_core_volume" "TFBlock0" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFBlock0"
  size_in_gbs = "${var.DBSize}"
}

resource "oci_core_volume_attachment" "TFBlock0Attach" {
  attachment_type = "iscsi"
  compartment_id = "${var.compartment_ocid}"
  instance_id = "${oci_core_instance.TFInstance.id}"
  volume_id = "${oci_core_volume.TFBlock0.id}"
}

provider.tf: The provider script sets the OCI connection details.

 

provider "oci" {
  tenancy_ocid = "${var.tenancy_ocid}"
  user_ocid = "${var.user_ocid}"
  fingerprint = "${var.fingerprint}"
  private_key_path = "${var.private_key_path}"
  region = "${var.region}"
  disable_auto_retries = "true"
}

datasources.tf: Defines the data sources used in the configuration.

# Gets a list of Availability Domains
data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

# Gets a list of vNIC attachments on the instance
data "oci_core_vnic_attachments" "InstanceVnics" {
  compartment_id      = "${var.compartment_ocid}"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  instance_id         = "${oci_core_instance.TFInstance.id}"
}

# Gets the OCID of the first (default) vNIC
data "oci_core_vnic" "InstanceVnic" {
  vnic_id = "${lookup(data.oci_core_vnic_attachments.InstanceVnics.vnic_attachments[0],"vnic_id")}"
}

outputs.tf: Defines the outputs of the configuration, namely the public and private IP addresses of the provisioned instance.

# Output the private and public IPs of the instance
output "InstancePrivateIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.private_ip_address}"]
}

output "InstancePublicIP" {
  value = ["${data.oci_core_vnic.InstanceVnic.public_ip_address}"]
}

remote-exec.tf: Uses a null_resource with a remote-exec provisioner and depends_on to execute commands on the instance once it is up and the volume is attached.

resource "null_resource" "remote-exec" {
  depends_on = ["oci_core_instance.TFInstance", "oci_core_volume_attachment.TFBlock0Attach"]

  provisioner "remote-exec" {
    connection {
      agent       = false
      timeout     = "30m"
      host        = "${data.oci_core_vnic.InstanceVnic.public_ip_address}"
      user        = "ubuntu"
      private_key = "${var.ssh_private_key}"
    }

    inline = [
      "touch ~/IMadeAFile.Right.Here",
      "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port}",
      "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -n node.startup -v automatic",
      "echo sudo iscsiadm -m node -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port} -l >> ~/.bashrc",
    ]
  }
}

Oracle Cloud Infrastructure - Configuration

The main configuration needed on OCI is the security setup that allows Terraform to authenticate and provision an instance.

Click the username at the top of the Oracle Cloud Infrastructure console and select User Settings from the drop-down that appears.

Now click the “Add Public Key” button to open a dialog, paste the contents of oci_api_key_public.pem (the public key) into it, and click the Add button.

Note: Refer to the links below for details on OCI API keys.

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How

https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3
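The docs linked above describe generating the API signing key pair with OpenSSL. A typical sequence looks like the following sketch (the ./keys directory and file names are illustrative):

```shell
# Generate an API signing key pair for OCI (paths are illustrative).
mkdir -p ./keys

# Private key, referenced by Terraform's private_key_path variable.
openssl genrsa -out ./keys/oci_api_key.pem 2048

# Public key, to paste into the "Add Public Key" dialog in the console.
openssl rsa -pubout -in ./keys/oci_api_key.pem -out ./keys/oci_api_key_public.pem

# Fingerprint of the key, used for the provider's "fingerprint" variable.
openssl rsa -pubout -outform DER -in ./keys/oci_api_key.pem | openssl md5 -c
```

The fingerprint printed by the last command is the colon-separated value the OCI console displays next to the uploaded key.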

 

Configuring the Build VM

Click the user drop-down at the top right of the page and select “Organization” from the menu.

Click the VM Templates tab and then the “New Template” button. Give the template a name of your choice and select “Oracle Linux 7” as the platform.

Once the template is created, click the “Configure Software” button.

Select Terraform from the list of available software bundles and click the + sign to add it to the template.

Then click “Done” to complete the software configuration.

Click the Virtual Machines tab, then the “+New VM” button. Enter the number of VMs you want to create and select the VM template you just created (“terraformTemplate” in this blog).

Build Job Configuration

As part of the build configuration, add Git from the “Add Source Control” drop-down, then select your repository and branch. Optionally, select the checkbox to trigger builds automatically on SCM commits.

Select the Unix Shell Builder from the Add Builder drop-down and add the build script. The script first configures the environment variables using env-vars, then copies oci_api_key.pem and oci_api_key_public.pem to the specified directory, and finally executes the Terraform commands that provision the OCI instance. The key commands are terraform init, terraform plan, and terraform apply.

terraform init – The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

terraform plan – The terraform plan command is used to create an execution plan. 

terraform apply – The terraform apply command is used to apply the changes required to reach the desired state of the configuration, that is, to execute the pre-determined set of actions generated by a terraform plan run.
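The steps above can be combined into a build step like the following sketch. The env-vars file, the exampleTerraform folder, and the key paths are assumptions based on the description; adjust them to your repository layout. The script is written to a file here so it can be reviewed before being pasted into the Unix Shell Builder.

```shell
# Hypothetical Unix Shell Builder step; paths are assumptions, not
# a definitive layout -- adapt them to your own repository.
cat > build-step.sh <<'EOF'
#!/bin/sh
# Configure the OCI environment variables (tenancy, user, fingerprint,
# region) from the env-vars file checked into the repository.
. ./exampleTerraform/env-vars

# Copy the API signing keys to the directory env-vars points at.
mkdir -p "$HOME/.oci"
cp exampleTerraform/oci_api_key.pem "$HOME/.oci/"
cp exampleTerraform/oci_api_key_public.pem "$HOME/.oci/"

# Initialize the working directory, preview the plan, then apply it
# non-interactively so the build does not wait for confirmation.
terraform init
terraform plan
terraform apply -auto-approve
EOF
chmod +x build-step.sh
```

In the build job the same commands would simply be entered directly into the Unix Shell Builder's script field.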

After execution, the build prints the IP addresses of the provisioned instance as output and then attempts an SSH connection to the machine using the RSA keys supplied in the exampleTerraform folder.

Configure the Artifact Archiver to archive the terraform.tfstate file generated during the build. You may set the compression to GZIP or NONE.

Post Build Job Execution

In the build log you will be able to see the private and public IP addresses of the instance provisioned by the Terraform scripts, followed by the SSH connection attempt. If everything goes fine, the build job should complete successfully.

Now you can go to the Oracle Cloud Infrastructure console and see that the instance has been created for you, along with the network resources and block volume defined in the Terraform scripts.

So now you can easily automate provisioning of Oracle Cloud Infrastructure using Terraform as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

 

8th May 2018 |

Developer Cloud Service May Release Adds K8S, OCI, Code Editing and More

Just a month after the recent release of Oracle Developer Cloud Service - which added support for pipelines, Docker, and Terraform - we are happy to announce another update to the service that adds even more options to help you extend your DevOps and CI/CD processes to support additional use cases.

Here are some highlights of the new version:

Extended build server software

You can now create build jobs and pipelines that leverage:

  • Kubernetes - use the kubectl command line to manage your Docker containers
  • OCI command line - to automate provisioning and configuration of Oracle Compute
  • Java 9 - for your latest Java project deployments
  • Oracle development tools - Oracle Forms and Oracle JDeveloper 12.2.3 are now available to automate deployment of Forms and ADF apps

 

Build Server Software Options

SSH Connection in Build

You can now define an SSH connection as part of your build configuration, allowing you to securely connect to and execute shell scripts on Oracle Cloud Services.

In-Browser Code Editing and Versioning

A new "pencil" icon lets you edit code in your private Git repositories hosted in Developer Cloud Service directly in your browser. Once you have edited the code, you can commit the changes directly to your branch, providing a commit message.

Code editing in the browser

PagerDuty Webhook

Continuing our principle of keeping the environment open, we added new webhook support that lets you send events to the popular PagerDuty solution.

Increased Reusability

We are making it easier to replicate things that already work for your team. For example, you can now create a new project based on an existing project you exported. You can copy an agile board over to a new one. If you created a useful issue search - you can share it with others in your team.

There are many other features that will improve your daily work; have a look at the What's New in DevCS document for more information.

Happy development!