Oracle Blogs | Oracle Developers Blog

 

20th March 2019 |

New Features in Oracle Visual Builder - PWA, Components Catalog, and Much More

 

20th March 2019 |

Podcast: Polyglot Programming and GraalVM

How many programming languages are there? I won’t venture a guess. There must be dozens, if not hundreds. The 2018 State of the Octoverse Report from GitHub identified the following as the top ten most popular languages among GitHub contributors:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C++
  6. C#
  7. TypeScript
  8. Shell
  9. C
  10. Ruby

So the word “polyglot” definitely describes the world of the software coder.

Polyglot programming is certainly nothing new, but as the number of languages grows, and as language preferences among coders continue to evolve, what happens to decisions about which language to use in a particular project? In this program we'll explore the meaning and evolution of polyglot programming, examine the benefits and challenges of mixing and matching different languages, and then discuss the GraalVM project and its impact on polyglot programming.

This is Oracle Groundbreakers Podcast #364. It was recorded on Monday February 11, 2019. Time to listen...

The Panelists (listed alphabetically)

Roberto Cortez
Java Champion
Founder and Organizer, JNation

Dr. Chris Seaton, PhD
Research Manager, Virtual Machine Group, Oracle Labs

Oleg Selajev
Lead Developer Advocate, GraalVM, Oracle Labs

Additional Resources

Coming Soon
  • Dmitry Kornilov, Tomas Langer, Jose Rodriguez, and Phil Wilkins discuss the ins, outs, and practical applications of Helidon, the lightweight Java microservices framework.
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please post a comment. We'll get back to you right away.

 

15th March 2019 |

Enterprise applications meet cloud native

In conversations with enterprise customers, we find that many are adopting a cloud-native strategy for new, in-house development projects. This approach of short development cycles, iterative functional delivery and automated CI/CD tooling is allowing them to deliver innovation for users and customers more quickly than ever before. One of Oracle’s top 10 predictions for developers in 2019 is that legacy enterprise applications will jump to cloud-native development approaches.

The need to move to cloud-native is rooted in the fact that, at heart, all companies are software companies. Those that can use software to their advantage, to speed up and automate their business and to make it easier for their customers to interact with them, win. This is the nature of business today, and the reason that start-ups such as Uber can disrupt whole existing industries.

Cloud-native technologies like Kubernetes, Docker containers, microservices and functions provide the basis to scale, secure and enable these new solutions.

However, enterprises typically have a complex stack of applications and infrastructure; this usually means monolithic custom or ISV applications that are anything but cloud-native. New cloud-native solutions need to interact with these legacy systems, but they run in the cloud rather than on-premises and need delivery cycles of days rather than months. Enterprises need to address this technical debt in order to realise the full benefits of a cloud-native approach. Re-writing these monoliths is not practical in the short term due to the resources and time needed. So, what are the options to modernise enterprise applications?

Move the Monolith

Moving these applications to the cloud can realise the cloud economics of elasticity and paying only for what you use. This means thinking of infrastructure as code rather than physical compute, network and storage. Using tools such as Terraform – https://www.terraform.io – to create and delete infrastructure resources and Packer – https://www.packer.io – to manage machine images means we can create environments when needed and tear them down when not. Although this does not immediately address modernisation of the application itself, it does start to automate the infrastructure and integrate these applications into cloud-native development and delivery. For an example, see https://blogs.oracle.com/developers/build-oracle-cloud-infrastructure-custom-images-with-packer-on-oracle-developer-cloud

Containerise and Orchestrate 

A cloud native strategy is largely based on running applications in Docker containers to give the flexibility of deployment on premises and across different cloud providers. A common approach is to containerise existing applications and run them on premises before moving to the cloud. 

Many enterprise applications, both in-house developed and ISV supplied, are WebLogic based, and enterprises are looking to do the same with these. WebLogic now runs in Docker containers, so the same approach can be taken – https://hub.docker.com/_/oracle-weblogic-server-12c.

As initial, suitable workloads (workloads that have fewer on-prem integration points, or are good candidates from a compliance standpoint) are containerised and moved to the cloud, the management and orchestration of containers into solutions begins to become an issue. Container management and orchestration platforms such as Kubernetes, Docker Swarm, etc. are being adopted, and Kubernetes is emerging as the platform of choice for enterprises to manage containers in the cloud. Oracle has developed a WebLogic Kubernetes Operator that allows Kubernetes to understand and manage WebLogic domains, clustering, and more – https://github.com/oracle/weblogic-kubernetes-operator

Integrating with version control like GitHub, secure Docker repositories and CI/CD tooling to deploy to Kubernetes really brings these enterprise applications to the core of a cloud-native strategy. It also means existing WebLogic and Java skills in the organisation continue to be relevant in the cloud.

Breaking It Down

To fully benefit from running these applications in the cloud, their functionality needs to be integrated with the new cloud-native services and also to become more agile. An evolving pattern is to take an agile approach, refactoring the enterprise application over a series of iterations. A first step is to separate the UI from the functional code and create APIs to access the business functionality. This allows new cloud-native applications to access the required functionality and facilitates the shorter delivery cycles enterprises are demanding. Over time, these services can be rebuilt and deployed as cloud services, eventually migrating away from the legacy application. Helidon is a collection of Java libraries for writing microservices that helps re-use existing Java skills to re-develop the code behind the services.

As more and more services are deployed, management, versioning and monitoring become increasingly important. A service mesh is evolving as the way to do this: a dedicated infrastructure layer for handling service-to-service communication, responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud-native application. Istio is emerging as an enterprise choice and can easily be installed on Kubernetes.

In Conclusion

More and more enterprises are adopting a cloud-native approach for new development projects, but they are struggling with the technical debt of large monolithic enterprise applications when trying to modernise them. However, there are a number of strategies and technologies that can be used to help migrate and modernise these legacy applications in the cloud. With the right approach, existing skills can be maintained and evolved into a container-based, cloud-native environment.

 

14th March 2019 |

Kata Containers: An Important Cloud Native Development Trend
Introduction

One of Oracle’s top 10 predictions for developers in 2019 was that a hybrid model that falls between virtual machines and containers will rise in popularity for deploying applications.

Kata Containers are a relatively new technology that combine the speed of development and deployment of (Docker) containers with the isolation of virtual machines. In the Oracle Linux and virtualization team we have been investigating Kata Containers and have recently released Oracle Container Runtime for Kata on Oracle Linux yum server for anyone to experiment with. In this post, I describe what Kata containers are as well as some of the history behind this significant development in the cloud native landscape. For now, I will limit the discussion to Kata as containers in a container engine. Stay tuned for a future post on the topic of Kata Containers running in Kubernetes.

History of Containerization in Linux

The history of isolation, sharing of resources and virtualization in Linux and in computing in general is rich and deep. I will skip over much of this history to focus on some of the key landmarks on the way there. Two Linux kernel features are instrumental building blocks for the Docker Containers we’ve become so familiar with: namespaces and cgroups.

Linux namespaces are a way to partition kernel resources such that two different processes have their own view of resources such as process IDs, file names or network devices. Namespaces determine what system resources you can see.

Control Groups, or cgroups, are a kernel feature that enables processes to be grouped hierarchically such that their use of subsystem resources (memory, CPU, I/O, etc.) can be monitored and limited. Cgroups determine what system resources you can use.
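Both building blocks are easy to poke at from userspace on any modern Linux system. The sketch below just inspects the current process (it assumes a Linux /proc filesystem):

```python
import os

# Namespaces determine what you can SEE. Each symlink under /proc/self/ns
# names one namespace this process belongs to, e.g. "pid:[4026531836]";
# two processes that share a namespace see the same inode number here.
pid_ns = os.readlink("/proc/self/ns/pid")
print(pid_ns)

# Cgroups determine what you can USE. /proc/self/cgroup lists the cgroup
# hierarchies (and the paths within them) this process is a member of.
with open("/proc/self/cgroup") as f:
    cgroups = f.read()
print(cgroups)
```

Running this from two different shells and comparing the inode numbers is a quick way to see whether two processes share a namespace.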

One of the earliest containerization features available in Linux to combine both namespaces and cgroups was Linux Containers (LXC). LXC offered a userspace interface to make the Linux kernel containment features easy to use and enabled the creation of system or application containers. Using LXC, you could run, for example, CentOS 6 and Oracle Linux 7, two completely different operating systems with different userspace libraries and versions, on the same Linux kernel.

Docker expanded on this idea of lightweight containers by adding packaging, versioning and component reuse features. Docker Containers shortened the build-test-deploy cycle because they made it easier to package and distribute an application or service as a self-contained unit, together with all the libraries needed to run it. Their popularity also stems from the fact that they appeal to developers and operators alike: essentially, Docker Containers bridge the gap between dev and ops and shorten the cycle from development to deployment.

Because containers —both LXC and Docker-based— share the same underlying kernel, it’s not inconceivable that an exploit able to escape a container could access kernel resources or even other containers. Especially in multi-tenant environments, this is something you want to avoid.

Projects like Intel® Clear Containers and Hyper runV took a different approach to parceling out system resources: their goal was to combine the strong isolation of VMs with the speed and density (the number of containers you can pack onto a server) of containers. Rather than relying on namespaces and cgroups, they used a hypervisor to run a container image.

Intel® Clear Containers and Hyper runV came together in Kata Containers, an open source project and community, which saw its first release in March of 2018.

Kata Containers: Best of Both Worlds

The fact that Kata Containers are lightweight VMs means that, unlike traditional Linux containers or Docker Containers, Kata Containers don’t share the same underlying Linux kernel. Kata Containers fit into the existing container ecosystem because developers and operators interact with them through a container runtime that adheres to the Open Container Initiative (OCI) specification. Creating, starting, stopping and deleting containers works just the way it does for Docker Containers.

Image by OpenStack Foundation licensed under CC BY-ND 4.0

In summary, Kata Containers:

  • Run their own lightweight OS and a dedicated kernel, offering memory, I/O and network isolation
  • Can use hardware virtualization extensions (VT) for additional isolation
  • Comply with the OCI (Open Container Initiative) specification as well as CRI (Container Runtime Interface) for Kubernetes
Installing Oracle Container Runtime for Kata

As I mentioned earlier, we’ve been researching Kata Containers here in the Oracle Linux team, and as part of that effort we have released software for customers to experiment with. The packages are available on Oracle Linux yum server and its mirrors in Oracle Cloud Infrastructure (OCI). Specifically, we’ve released a kata-runtime and related components, as well as an optimized Oracle Linux guest kernel and guest image used to boot the virtual machine that will run a container.

Oracle Container Runtime for Kata relies on QEMU and KVM as the hypervisor to launch VMs. To install Oracle Container Runtime for Kata on a bare metal compute instance on OCI:

Install QEMU

QEMU is available in the ol7_kvm_utils repo. Enable that repo and install QEMU:

sudo yum-config-manager --enable ol7_kvm_utils
sudo yum install qemu

Install and Enable Docker

Next, install and enable Docker.

sudo yum install docker-engine
sudo systemctl start docker
sudo systemctl enable docker

Install kata-runtime and Configure Docker to Use It

First, configure yum for access to the Oracle Linux Cloud Native Environment - Developer Preview yum repository by installing the oracle-olcne-release-el7 RPM:

sudo yum install oracle-olcne-release-el7

Now, install kata-runtime:

sudo yum install kata-runtime

To make the kata-runtime an available runtime in Docker, modify Docker settings in /etc/sysconfig/docker. Make sure SELinux is not enabled.

The line that starts with OPTIONS should look like this:

$ grep OPTIONS /etc/sysconfig/docker
OPTIONS='-D --add-runtime kata-runtime=/usr/bin/kata-runtime'

Next, restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker

Run a Container Using Oracle Container Runtime for Kata

Now you can use the usual docker command to run a container, adding the --runtime option to indicate that you want to use kata-runtime. For example:

sudo docker run --rm --runtime=kata-runtime oraclelinux:7 uname -r
Unable to find image 'oraclelinux:7' locally
Trying to pull repository docker.io/library/oraclelinux ...
7: Pulling from docker.io/library/oraclelinux
73d3caa7e48d: Pull complete
Digest: sha256:be6367907d913b4c9837aa76fe373fa4bc234da70e793c5eddb621f42cd0d4e1
Status: Downloaded newer image for oraclelinux:7
4.14.35-1909.1.2.el7.container

To review what happened here: Docker, via the kata-runtime, instructed KVM and QEMU to start a VM based on a special-purpose kernel and minimized OS image. Inside the VM a container was created, which ran the uname -r command. You can see from the kernel version that a “special” kernel is running.

Running a container this way takes more time than a traditional container based on namespaces and cgroups, but considering that a whole VM is launched, it’s quite impressive. Let’s compare:

# time docker run --rm --runtime=kata-runtime oraclelinux:7 echo 'Hello, World!'
Hello, World!

real 0m2.480s
user 0m0.048s
sys 0m0.026s

# time docker run --rm oraclelinux:7 echo 'Hello, World!'
Hello, World!

real 0m0.623s
user 0m0.050s
sys 0m0.023s

That’s about 2.5 seconds to launch a Kata Container versus 0.6 seconds to launch a traditional container.

Conclusion

Kata Containers represent an important phenomenon in the evolution of cloud native technologies. They address both the need for security through virtual machine isolation as well as speed of development through seamless integration into the existing container ecosystem without compromising on computing density.

In this blog post I’ve described some of the history that brought us Kata Containers, and showed how you can experiment with them yourself using the Oracle Container Runtime for Kata packages.

 

14th March 2019 |

Getting Your Feet Wet With OCI Streams

Back in December we announced the development of a new service on Oracle Cloud Infrastructure called Streaming.  The announcement, product page and documentation have a ton of use cases and information on why you might use Streaming in your applications, so let's take a look at the how.  The OCI Console allows you to create streams and test them out via the UI dashboard, but here's a simple example of how to both publish and subscribe to a stream in code via the OCI Java SDK.

First you'll need to create a stream.  You can do that via the SDK, but it's pretty easy to do via the OCI Console.  From the sidebar menu, select Analytics - Streaming and you'll see a list of existing streams in your tenancy and selected compartment.

Click 'Create Stream' and populate the dialog with the information requested:

After your stream has been created you can view the Stream Details page, which looks like this:

As I mentioned above, you can test out stream publishing by clicking 'Produce Test Message' and populating the message and then test receiving by refreshing the list of 'Recent Messages' on the bottom of the Stream Details page.

To get started working with this stream in code, download the Java SDK (link above) and make sure it's on your classpath.  After you've got the SDK ready to go, create an instance of a StreamClient, which will allow you to make both 'put' and 'get' style requests.  Producing a message to the stream is a single 'put' request on the client.

Reading the stream requires you to work with a Cursor.  I like to work with group cursors because they handle auto-committing, so I don't have to manually commit the cursor.  After creating a group cursor, you use it to get the stream messages.  In my application I have this in a loop and reassign the cursor that is returned from the call to client.getMessages() so that the cursor always remains open and active.
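For a concrete sketch of this produce/consume flow, here is roughly what it looks like using the OCI Python SDK, a sibling of the Java SDK described above. The class and method names below are my recollection of the oci package's streaming API, and the endpoint, OCID and group name are hypothetical placeholders, so treat the details as assumptions and verify against the SDK reference:

```python
import base64
import oci

# Assumed setup: a stream already created in the console, plus a standard
# ~/.oci/config file. The stream OCID and messages endpoint are shown on
# the Stream Details page (both values below are placeholders).
config = oci.config.from_file()
client = oci.streaming.StreamClient(
    config, service_endpoint="https://streaming.<region>.oci.oraclecloud.com")
stream_id = "ocid1.stream.oc1..example"

# Produce: message values are base64-encoded strings.
message = oci.streaming.models.PutMessagesDetailsEntry(
    value=base64.b64encode(b"hello stream").decode())
client.put_messages(
    stream_id, oci.streaming.models.PutMessagesDetails(messages=[message]))

# Consume with a group cursor, which commits offsets automatically.
cursor = client.create_group_cursor(
    stream_id,
    oci.streaming.models.CreateGroupCursorDetails(
        group_name="example-group", type="TRIM_HORIZON")).data.cursor

for _ in range(10):  # poll a few times; a real consumer would loop forever
    response = client.get_messages(stream_id, cursor)
    for msg in response.data:
        print(base64.b64decode(msg.value))
    # Reassign the cursor from each response so it stays open and active.
    cursor = response.headers["opc-next-cursor"]
```

The same reassign-the-cursor pattern applies whichever SDK you use: each getMessages response hands back the cursor for the next read.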

And that's all it takes to create a stream, produce a message and read the messages from the stream.  It's not a difficult feature to implement, and in my observations the performance is comparable to Apache Kafka, but it's nice to have a native OCI offering that integrates well into my application.  There are also integration plans for upcoming OCI services that will eventually allow you to publish to a stream, so stay tuned for that.

 

13th March 2019 |

OCI New Service Roundup
This blog was originally published by Jesse Butler on the Cloud Native blog. 

 

13th March 2019 |

Nine Ways Oracle Cloud is Open

In the recent Break New Ground paper, 10 Predictions for Developers in 2019, openness was cited as a key factor. Developers want to choose their clouds based on openness. They want a choice of languages, databases, and compute shapes, among other things. This allows them to focus on what they care about – creating – without ops concerns or lock-in. In this post, we outline the top ways that Oracle is delivering a truly open cloud.

Databases

Oracle Cloud’s Autonomous Database, which is built on top of Oracle Database, conforms to open standards, including ISO SQL:2016, JDBC, Python PEP 249, ODBC, and many more. Autonomous Database is a multi-model database and supports relational as well as non-relational data, such as JSON, Graph, Spatial, XML, Key/Value, Text, amongst others. Because Oracle Autonomous Database is built on Oracle Database technology, customers can “lift and shift” workloads from/to other Oracle Database environments, including those running on third-party clouds and on-premises infrastructure. This flexibility makes Oracle Autonomous Database a truly open cloud service compared to other database cloud services in the market. Steve Daheb from Oracle Cloud Platform provides more information in this Q&A.

In addition, Oracle MySQL continues to be the world's most popular open source database (source code) and is available in Community and Enterprise editions. MySQL implements standards such as ANSI/ISO SQL, ODBC, JDBC and ECMA. MySQL can be deployed on-premises, on Oracle Cloud, and on other clouds.

Integration Cloud

With Oracle Data Integration Platform, you can access numerous Oracle and non-Oracle sources and targets to integrate databases with applications. For example, you can use MySQL databases on a third-party cloud as a source for Oracle apps, such as ERP, HCM, CX, NetSuite, and JD Edwards. In addition, Integration Cloud allows you to integrate Oracle Big Data Cloud, Hortonworks Data Platform, or Cloudera Enterprise Hub with a variety of sources: Hadoop, NoSQL, or Oracle Database.

You can also connect apps on Oracle Cloud with third-party apps. Consider a Quote to Order system: when a customer accepts a quote, the salesperson can update it in the CRM system, leverage Oracle’s predefined integration flows with Oracle ERP Cloud, and turn the quote into an order.

Java

Java is one of the top programming languages on Github (Oracle Code One 2018 keynote), with over 12 million developers in the community. All development for Java happens in OpenJDK and all design and code changes are visible to the community. Therefore, the evolution of ongoing projects and features is transparent. Oracle has been talking with developers who are and aren’t using Java to ensure that Java remains open and free, while making enhancements to OpenJDK. In 2018, Oracle open sourced all remaining closed source features: Application Class Data Sharing, Project ZGC, Flight Recorder and Mission Control. In addition, Oracle delivers binaries that are pure OpenJDK code, under the GPL, giving developers freedom to distribute them with frameworks and applications.

Oracle Cloud Native Services, including Oracle Container Engine for Kubernetes

Cloud Native Services include the Oracle Container Engine for Kubernetes and Oracle Cloud Infrastructure Registry. Container Engine is based on an unmodified Kubernetes codebase, and clusters can support bare-metal nodes, virtual machines or heterogeneous BM/VM environments. Oracle’s Registry is based on open Docker v2 standards, allowing you to use the same Docker commands to interact with it as you would with Docker Hub. Container images can be used on-premises and on Container Engine, giving you portability; Container Engine can also interoperate with third-party registries, and Oracle Cloud Infrastructure Registry with third-party Kubernetes environments. In addition, Oracle Functions is based on the open source Fn Project. Code written for Oracle Functions will therefore run not only on Oracle Cloud, but also with Fn clusters on third-party clouds and in on-premises environments.

Oracle offers the same cloud native capabilities as part of Oracle Linux Cloud Native Environment. This is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily deployed, have been tested for interoperability, and for which enterprise-grade support is offered. With Oracle’s Cloud Native Framework, users can run cloud native applications in the Oracle Cloud and on-premises, in an open hybrid cloud and multi-cloud architecture.

Oracle Linux Operating System

Oracle Linux, which is included with Oracle Cloud subscriptions at no additional cost, is a proven, open source operating system (OS) that is optimized for performance, scalability, reliability, and security. It powers everything in the Oracle Cloud – Applications and Infrastructure services. Oracle extensively tests and validates Oracle Linux on Oracle Cloud Infrastructure, and continually delivers innovative new features to enhance the experience in Oracle Cloud.

Oracle VM VirtualBox

Oracle VM VirtualBox is the world’s most popular, open source, cross-platform virtualization product. It lets you run multiple operating systems on Mac OS, Windows, Linux, or Oracle Solaris. Oracle VM VirtualBox is ideal for testing, developing, demonstrating, and deploying solutions across multiple platforms on one machine. It supports exporting of virtual machines to Oracle Cloud Infrastructure and enables them to run on the cloud. This functionality facilitates the experience of using VirtualBox as the development platform for the cloud.

Identity Cloud Services

Oracle Identity Cloud Service provides 100% API coverage of all product capabilities for rich integration with custom applications. It allows compliance with open standards such as SCIM, REST, OAuth and OpenID Connect for easy application integrations. Customers can easily consume these APIs in their applications to take advantage of identity management capabilities.

Oracle Identity Cloud Service seamlessly interoperates with on-premises identities in Active Directory to provide Single Sign On between Cloud and On-Premises applications. Through its Identity Bridge component, Identity Cloud can synchronize all the identities and groups from Active Directory into its own identity store in the cloud. This allows organizations to take advantage of their existing investment in Active Directory. And, they can extend their services to Oracle Cloud and external SaaS applications.

Oracle Blockchain Platform

Oracle Blockchain Platform is built on open source Hyperledger Fabric, making it interoperable with non-Oracle Hyperledger Fabric instances deployed in your data center or in third-party clouds. In addition, the platform uses REST APIs for plug-and-play integration with Oracle SaaS and on-premises apps such as NetSuite ERP, Flexcube core banking, and the Open Banking API Platform, among others.

Oracle Mobile Hub (Mobile Backend as a Service – MBaaS)

Oracle Mobile Hub is an open and flexible platform for mobile app development. With Mobile Hub, you can:

  • Develop apps for any mobile client: iOS or Android based phones

  • Connect to any backend via standard RESTful interfaces and SOAP web services

  • Support both native mobile apps and hybrid apps. For example, you can develop with Swift or Objective C for native iOS apps, Java for native Android apps, and JavaScript for Hybrid mobile apps

In addition, Oracle Visual Builder (VB) is a cloud-based software development Platform as a Service (PaaS) and a hosted environment for your application development infrastructure. It provides an open source, standards-based solution to develop, collaborate on, and deploy applications within Oracle Cloud, offering an easy way to create and host web and mobile applications in a secure cloud environment.

Takeaway

In choosing a cloud vendor, openness can provide a significant advantage, allowing you to choose amongst languages, databases, hardware, clouds, and on-premises infrastructure.  With a free trial on Oracle Cloud, you can experience the benefits of these open technologies – no strings attached.

Feel free to start a conversation below.

 

12th March 2019 |

How to Use OSvC Restful APIs in Python: Quickly and Easily

Have you ever had to act quickly to create an automated process to restore, update or even delete bad data in Oracle Service Cloud? If so, you'll know that there are different approaches. So what do you do? Many people have found success by writing a PHP script and hosting it in Oracle Service Cloud Customer Portal (CP). But there are a few things you should know before you go down this road, to ensure you don't overload your Customer Portal server, create a bad experience for your end-user customers, or generate extra sessions that count against your license compliance agreement. This post shows you a different road: with just a few lines of Python you can create a process that lets you successfully implement an integration with little time investment.

First, make sure you have Python installed locally. Take a look at the Python documentation online to get this first step done. If you want to play with it first, the Anaconda Distribution is the easiest way to go.

Let's get started. Here is a simple Python script you can use to make a REST API request. Make sure you replace variable values where it says [REPLACE ...].

import requests
import json
from requests.auth import HTTPBasicAuth

def main():
    try:
        site = '[REPLACE FOR YOUR SITE]'
        payload = {"id": [REPLACE FOR YOUR REPORT ID],
                   "filters": [{"name": "[REPLACE FOR REPORT FIELD]",
                                "operator": {"lookupName": "="},
                                "values": "[VALUE]"}]}
        response = requests.post(site + '/analyticsReportResults',
                                 auth=HTTPBasicAuth('[REPLACE FOR YOUR USER]',
                                                    '[REPLACE FOR PASSWORD]'),
                                 data=json.dumps(payload))
        json_data = json.loads(response.text)
        print(json_data['rows'])
    except Exception as e:
        print('Error: %s' % e)

main()

 

Now that you know how to make an API request quickly from a Python script, you are ready to solve data issues such as restores, backups, updates, creation, deletion, etc. You can even read from one site and insert into another: for example, if you have a backup site, you can use the same process to request data from site A and insert it into site B.

Just make sure you don't create parallel threads that hammer your OSvC server with a massive number of simultaneous requests.
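One simple way to stay safe is to keep the calls sequential and paced. In this sketch, `update_record` is a hypothetical stand-in for whatever REST call you are making:

```python
import time

def process_throttled(records, update_record, delay_seconds=0.5):
    """Process records one at a time, pausing between API calls instead of
    spawning parallel threads against the server."""
    results = []
    for record in records:
        results.append(update_record(record))
        time.sleep(delay_seconds)  # give the server breathing room
    return results

# Example with a dummy update function standing in for the real API call:
print(process_throttled([1, 2, 3], lambda r: r * 2, delay_seconds=0.1))
```

Tune delay_seconds to whatever request rate your site can comfortably absorb.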

Yep, that's it! I hope this helps!

 

6th March 2019 |

Kubernetes and the "Platform Engineer"

One of Oracle's top 10 predictions for developers in 2019 was that developers will need to partner with a platform engineer, a key new role emerging for cloud native development.  Recent conversations with enterprise customers have reinforced this, and it is becoming clear that a separation of concerns is emerging for those delivering production applications on top of Kubernetes infrastructure: application developers build the containerized apps driven by business requirements, while the “platform engineers” own and run the supporting Kubernetes infrastructure and platform components.  For those familiar with DevOps or SRE (pick your term), this is arguably nothing new, but the consolidation of these teams around the Kubernetes API is leading to something altogether different.  In short, the Kubernetes YAML file (via the Kubernetes API) is becoming the contract or hand-off between application developers and the platform team (or, more succinctly, between dev and ops).
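As a concrete illustration of that contract, the hand-off from an application team to the platform team can be as small as a Deployment manifest; everything else (nodes, networking, monitoring) lives behind the Kubernetes API. All names and values below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service        # hypothetical app name
  labels:
    app: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:1.4.2   # hypothetical image
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
```

The developer declares what to run and how much of it; the platform team decides where and on what it runs.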

In the beginning, there was PaaS

Well, actually there was infrastructure!  But for application developers, there were an awful lot of pieces to assemble (compute, network, storage) to deliver an application.  Technologies like virtualization and infrastructure as code (Terraform et al.) made it easier to automate the infrastructure part, but there were still a lot of moving parts.  Early PaaS (Platform as a Service) pioneers, recognizing this complexity for developers, created platforms abstracting away much of the infrastructure (and complexity), albeit for a very targeted (or “opinionated”) set of application use cases or patterns.  That is fine if your application fits into that pattern, but if not, you are back to dealing with infrastructure.

Then Came CaaS

Following the success of container technology, popularized in recent years by Docker, so-called “Containers as a Service” offerings emerged a few years back.  Sitting somewhere between IaaS and PaaS, CaaS services abstract some of the complexity of dealing with raw infrastructure, allowing teams to deploy and operate container-based applications without having to build, set up and maintain their own container orchestration tooling and supporting infrastructure.

The emergence of CaaS also largely coincided with the rise of Kubernetes as the de facto standard in container orchestration. The majority of CaaS offerings today are managed Kubernetes offerings (not all offerings are created equal, though; see The Journey to Enterprise Managed Kubernetes for more details). As discussed previously, Kubernetes has essentially become the new operating system for the cloud, and arguably the modern application server, as it continues to move up the stack. At a practical level, this means that in addition to the benefits of a CaaS described above, customers benefit from standardization and portability of their container applications across multiple cloud providers and on-prem (assuming those providers remain conformant with upstream Kubernetes).

Build your Own PaaS?

Despite CaaS offerings and their standardization on Kubernetes, there is still a lot of potential complexity for developers. With “complexity”, “cultural changes”, and “lack of training” recently cited as some of the most significant inhibitors to container and Kubernetes adoption, we can see there’s still work to do. An interesting talk at KubeCon Seattle played on this with the title “Kubernetes is Not for Developers and Other Things the Hype Never Told You”.

Enter the platform engineer. Kubernetes is broad and deep, and in many cases only a subset of it ultimately needs to be exposed to end developers. For an enterprise that wants to offer a modern container platform to its developers, there are a lot of common elements and tooling that every application team consuming the platform shouldn’t have to reinvent. Examples include (but are not limited to) monitoring, logging, service mesh, secure communication/TLS, ingress controllers, network policies, and admission controllers. In addition to presenting common services to developers, the platform engineer can even extend Kubernetes (via its extension APIs), with things like the Service Catalog/Open Service Broker to facilitate easier integration for developers with other existing cloud services, or with Kubernetes Operators: essentially helpers that developers can consume for creating (stateful) services in their clusters (see examples here and here).
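One small, concrete piece of that platform layer might be a default-deny network policy that the platform engineer applies to every application namespace (a hypothetical sketch; the namespace name is invented):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a          # hypothetical application namespace
spec:
  podSelector: {}            # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress                  # no ingress rules are listed, so all ingress is denied
```

Developers then opt traffic in explicitly, rather than the platform team chasing down what should be locked out.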

The platform engineer, in essence, has an opportunity to carve out the right cross-section of Kubernetes for the business (hence “build your own PaaS”), both in terms of the services that are exposed to developers to promote reuse, and in the enforcement of business policy (security and compliance).

Platform As Code

The fact that you can leverage the same Kubernetes API, CLI (“kubectl”), and deployment (YAML) files to drive the above platform has led some to call the approach “Platform as Code”: essentially an evolution of Infrastructure as Code, but in this case native Kubernetes interfaces drive the creation of a complete Kubernetes-based application platform for enterprise consumption.

The platform engineer and the developer now have a clear separation of concerns (with the appropriate Kubernetes RBAC roles and role bindings in place!). The platform engineer can check the complete definition of the platform described above into source control. Similarly, the developer consuming the platform checks their Kubernetes application definition into source control, and the Kubernetes YAML definition becomes the contract (and enforcement point) between the developer and the platform engineer.
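That separation can be sketched in RBAC terms. A hypothetical role and role binding (namespace and group names are invented) that confine an application team to managing workloads in its own namespace might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-developer
  namespace: team-a              # hypothetical application namespace
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "configmaps", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-developer-binding
  namespace: team-a
subjects:
- kind: Group
  name: team-a-developers        # hypothetical developer group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-developer
  apiGroup: rbac.authorization.k8s.io
```

Cluster-scoped concerns (nodes, CRDs, policies) stay with the platform engineer.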

Platform engineers ideally have a strong background in infrastructure software, networking and systems administration.  Essentially, they are working on the (Kubernetes) platform to deliver a product/service to (and in close collaboration with) end development teams.

In the future, we would expect additional work in the community around both sides of this contract: for developers, in how they can discover which common services the platform offers; and for platform engineers, in how they can provide (and enforce) a clear contract for their development-team customers.

 

6th March 2019 |

Four New Oracle Cloud Native Services in General Availability

This post was jointly written by Product Management and Product Marketing for Oracle Cloud Native Services. 

To those who participated in the Cloud Native Services Limited Availability Program, thank you from the team! We have an important update: four more Cloud Native Services have just gone into General Availability.

Resource Manager for DevOps and Infrastructure as Code

Resource Manager is a fully managed service that uses open source HashiCorp Terraform to provision, update, and destroy Oracle Cloud Infrastructure resources at scale. Resource Manager integrates seamlessly with Oracle Cloud Infrastructure to improve team collaboration and enable DevOps. It can be useful for repetitive deployment tasks such as replicating similar architectures across Availability Domains or large numbers of hosts. You can learn more about Resource Manager through this blog post.
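For a flavor of what Resource Manager consumes, here is a minimal Terraform configuration sketch (the compartment OCID, namespace, and bucket name are placeholders, not real values):

```hcl
# Minimal sketch of a Terraform configuration that Resource Manager
# could apply. All identifiers below are illustrative placeholders.
variable "compartment_ocid" {}

resource "oci_objectstorage_bucket" "example" {
  compartment_id = var.compartment_ocid
  namespace      = "mytenancynamespace"   # your Object Storage namespace
  name           = "example-bucket"
}
```

You upload a zip of configurations like this as a "stack", and Resource Manager runs the plan/apply/destroy jobs for you.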

Streaming for Event-based Architectures

Streaming Service provides a “pipe” to flow large volumes of data from producers to consumers. Streaming is a fully managed service with scalable and durable storage for ingesting large volumes of continuous data via a publish-subscribe (pub-sub) model. There are many use cases for Streaming: gathering data from mobile and IoT devices for real-time analytics, shipping logs from infrastructure and applications to an object store, and tracking current financial information to trigger stock transactions, to name a few. Streaming is accessible via the Oracle Cloud Infrastructure Console, SDKs, CLI, and REST API, and provides Terraform integration. Additional information on Streaming is available in this blog post.
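The pub-sub model itself is easy to picture. The toy sketch below (plain Python, deliberately not the OCI SDK) shows the shape of the flow Streaming manages at scale: producers append to a durable log, and each consumer reads from its own offset, so independent consumers all see the full stream.

```python
from collections import defaultdict

class ToyStream:
    """In-memory sketch of a pub-sub 'pipe': producers append records,
    and each consumer tracks its own read offset into the log."""
    def __init__(self):
        self.records = []                 # append-only log (in real life: partitioned, durable storage)
        self.offsets = defaultdict(int)   # per-consumer read position

    def publish(self, record):
        self.records.append(record)

    def consume(self, consumer_id, max_records=10):
        start = self.offsets[consumer_id]
        batch = self.records[start:start + max_records]
        self.offsets[consumer_id] += len(batch)
        return batch

stream = ToyStream()
stream.publish({"device": "sensor-1", "temp": 21.5})
stream.publish({"device": "sensor-2", "temp": 19.0})

print(stream.consume("analytics"))   # both records
print(stream.consume("analytics"))   # [] -- this consumer is caught up
print(stream.consume("archiver"))    # an independent consumer sees all records again
```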

Monitoring and Notifications for DevOps

Monitoring provides a consistent, integrated method to obtain fine-grained telemetry and notifications for your entire stack. Monitoring allows you to track infrastructure utilization and respond to anomalies in real time. Besides the performance and health metrics available out of the box for infrastructure, you can get custom metrics for visibility across the stack, real-time alarms based on triggers, and notifications via email and PagerDuty. The Metrics Explorer provides a comprehensive view across your resources. You can learn more through these blog posts for Monitoring and Notifications. In addition, using the Data Source for Grafana, users can create Grafana dashboards for monitoring metrics.
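Conceptually, an alarm is just a trigger evaluated over a metric stream. The toy sketch below (not the Monitoring API, just the idea) fires when a threshold is breached for several consecutive datapoints, which is the usual way to avoid alerting on a single spike:

```python
def evaluate_alarm(datapoints, threshold, min_breaches=3):
    """Toy alarm: fire when `min_breaches` consecutive datapoints exceed
    `threshold` (real alarm services use richer query/trigger languages)."""
    consecutive = 0
    for value in datapoints:
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= min_breaches:
            return True
    return False

cpu_percent = [52, 71, 93, 95, 97, 60]   # sample utilization datapoints
if evaluate_alarm(cpu_percent, threshold=90):
    print("ALARM: sustained high CPU, notify via email/PagerDuty")
```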

Next Steps

We would like to invite you to try these services and provide your feedback below. A free $300 trial is available at cloud.oracle.com/tryit. To evaluate other Cloud Native Services in Limited Availability, including Functions for serverless applications, please complete this sign-up form.

 

4th March 2019 |

Why You Should Be Using Grafana With OCI

A few days ago we announced the availability of the Oracle Cloud Infrastructure datasource for Grafana. I've heard about Grafana quite a bit over the past few years, and it was used to monitor our cloud environment in my last project before joining Oracle, but to be perfectly honest, I'd never really played around with it myself. This week I decided to change that, and I'm really glad I did, because I've already found practical uses for it that developers who host their applications in Oracle's cloud can really benefit from. I won't go into detail on how to install Grafana or configure the datasource - the post linked above does a good job of that, so please refer to it to get started. Instead, I wanted to share an immediate benefit that I came across when I created my first dashboard.

The very first graph that I created was a simple look at my Object Storage buckets. I kept things simple and just added three metrics that I thought would be useful: Object Count, Stored Bytes, and Uncommitted Parts. Here's how that graph looked, as of the time I wrote this article, for one of my buckets:

Notice the blue line? Yeah, so did I. In fact, it was the very first thing that jumped out at me. That blue line represents 15 MB of 'uncommitted parts' - in other words, storage being used by in-progress, aborted, or otherwise uncommitted multipart uploads. Now, 15 MB is nothing in the scope of a large enterprise application. In my case it's just leftovers from when I was testing out multipart upload for another blog post. But for some applications, this number could get large. Really large. A project I was on a few years ago allowed users to upload potentially very large (5-20 GB) video files and handled the uploads via multipart/chunked uploads from pretty much anywhere in the world - which, as you can imagine, sometimes meant really poor internet connections. The idea that we could have been paying for potentially terabytes worth of storage for unused files kind of makes me shudder, but with Grafana on OCI you'd be able to quickly and easily keep an eye on these sorts of things. Obviously, it goes much further than this simple example, but I think it illustrates the point well enough.

To clean things up I decided to turn to the OCI CLI and grabbed a list of the outstanding multipart uploads like so:

oci os multipart list -bn doggos --all

To clean them up, unfortunately, you have to manually abort each upload.  If you've read many of my posts, you'll know that I am a big fan of Groovy for both web and scripting, so I came up with the following quick script to loop over each stranded upload and abort them:
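(The Groovy script itself was embedded in the original post. As a rough equivalent, here is a Python sketch that shells out to the OCI CLI's `multipart list` and `multipart abort` commands, using the `doggos` bucket from the listing above; the JSON field names are assumptions based on the CLI's usual kebab-case output.)

```python
import json
import shutil
import subprocess

BUCKET = "doggos"  # the bucket from the CLI listing above

def stranded_uploads(listing_json):
    """Extract (object name, upload id) pairs from `oci os multipart list` JSON output."""
    return [(item["object"], item["upload-id"])
            for item in json.loads(listing_json).get("data", [])]

if __name__ == "__main__" and shutil.which("oci"):  # only run where the OCI CLI is installed
    listing = subprocess.run(
        ["oci", "os", "multipart", "list", "-bn", BUCKET, "--all"],
        capture_output=True, text=True, check=True).stdout
    for obj, upload_id in stranded_uploads(listing):
        # each stranded upload has to be aborted individually
        subprocess.run(["oci", "os", "multipart", "abort", "-bn", BUCKET,
                        "-on", obj, "--upload-id", upload_id, "--force"],
                       check=True)
        print(f"aborted {obj} ({upload_id})")
```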

That cleaned up all of the abandoned multipart uploads. How does your organization use Grafana? Feel free to share in the comments below.

 

2nd March 2019 |

CI/CD Automation for Fn Project with Oracle FaaS and Developer Cloud Service

By this time you have probably seen multiple blogs about the Fn Project - an open-source, multi-language, container-native serverless platform. And you might have already heard that Oracle is going to offer a cloud-hosted Functions as a Service (FaaS) for Fn-based functions called Oracle Functions - currently in limited access (get your invite to try it out here).

So how do you create an automated CI/CD chain for your Fn functions?

Oracle Developer Cloud Service now provides built-in functionality to support you.

DevCS now supports Fn Project lifecycle commands in our CI/CD jobs. This means that you can automate the Fn build and deploy steps in a declarative way. We also added support that enables you to leverage the hosted FaaS offering in the cloud and run CI/CD directly against that environment.

Here are the basic steps to get DevCS hooked up to your Fn-based FaaS service running on Oracle Cloud Infrastructure.

Your build will have several steps, including:

Docker Login

This step lets you connect to the hosted Docker registry in the Oracle Cloud (OCIR).

Provide your OCIR URL (phx.ocir.io, for example), your user (tenancy/username), and your auth token (note that this is not your password, but rather the auth token you can generate under Identity > Users > Auth Tokens).

Docker Login

OCIcli Configuration

The next step is to configure access to your OCI environment - you do this by picking the OCIcli build step. Then provide the information, including your user's OCID and fingerprint, your tenancy OCID, your region, and paste in the private key that you generated.

OCI CLI

OCI Fn Configuration

Now that your OCI connection is set, let's add the specific configuration for your FaaS instance. From the Fn menu in DevCS, pick the Fn OCI option. Configure it with the details of the Fn environment you created, including the compartment ID, the provider (oracle), and the passphrase you used when you created your private key.

Your environment is now ready to use the specific Fn lifecycle commands. We are going to assume that your Fn function code is in the root directory of the Git repository you hooked up to the build job.

Fn Build

The first step builds the function for us. If the code is at the root of your Git repository, then you only need to specify the Registry Host (phx.ocir.io) and the username (tenant/user). You can also check the box to get verbose output from the build operation.

Fn Deploy

If the build was successful, the next step is to deploy it to our FaaS service. First, make sure you created an app in your FaaS console, and use the name of that app to fill in the "Deploy to App" field. Fill out the Registry Host and Username fields as in the previous step, and don't forget to add the API URL (https://functions.us-phoenix-1.oraclecloud.com). You can then decide on additional options such as verbose output, bumping the version of the app, etc.

Now run the Build and watch the magic take place.
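Outside of DevCS, the same lifecycle can be driven from the Fn CLI. The sketch below simply composes the equivalent environment and commands (the registry, user, app name, and API URL values mirror the examples above and are placeholders; `FN_REGISTRY` and `FN_API_URL` are the environment variables the Fn CLI reads):

```python
import shlex

def fn_pipeline(registry_host, user, app_name, api_url):
    """Compose the Fn CLI equivalent of the DevCS steps above:
    environment variables plus the docker/fn commands to run."""
    env = {
        "FN_REGISTRY": f"{registry_host}/{user}",  # where built images are pushed
        "FN_API_URL": api_url,                     # the hosted FaaS endpoint
    }
    commands = [
        ["docker", "login", registry_host],        # Docker Login step
        ["fn", "build"],                           # Fn Build step
        ["fn", "deploy", "--app", app_name],       # Fn Deploy step
    ]
    return env, commands

env, cmds = fn_pipeline("phx.ocir.io", "mytenancy/myuser", "myapp",
                        "https://functions.us-phoenix-1.oraclecloud.com")
for key, value in env.items():
    print(f"export {key}={shlex.quote(value)}")
for cmd in cmds:
    print(" ".join(cmd))
```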

Check out the video below to see it in action.

 

 

 

20th February 2019 |

Podcast: JET-Propelled JavaScript

JavaScript has been around since 1995. But a lot has changed in nearly a quarter-century. No longer limited to the browser, JavaScript has become a full-fledged programming language, finding increasing use in enterprise application development. In this program, a panel of experts explores the evolution of JavaScript, discusses how it is used in modern development projects, and then takes a close look at the Oracle JavaScript Extension Toolkit, otherwise known as JET. Take a listen!

This program is Oracle Groundbreakers podcast #363. It was recorded on Thursday January 17, 2019.

The Panelists Listed alphabetically Joao Tiago Abreu Joao Tiago Abreu
Software Engineer and Oracle JET Specialist, Crossjoin Solutions, Portugal
Twitter  LinkedIn  Andrejus Baranovskis Andrejus Baranovskis
Oracle Groundbreaker Ambassador
Oracle ACE Director
CEO & Oracle Expert, Red Samurai Consulting
Twitter LinkedIn Luc Bors Luc Bors
Oracle Groundbreaker Ambassador
Oracle ACE Director
Partner & Technical Director, eProseed, Netherlands
Twitter LinkedIn John Brock John Brock
Senior Manager, Product Management, Development Tools, Oracle, Seattle, WA
Twitter LinkedIn  Daniel Curtis Daniel Curtis
Oracle Front End Developer, Griffiths Waite, UK
Author of Practical Oracle JET: Developing Enterprise Applications in JavaScript (June 2019, Apress)
Twitter LinkedIn    Additional Resources Coming Soon
  • DevOps, Streaming, Liquid Software, and Observability. Featuring panelists Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov
  • Polyglot Programming and GraalVM. Featuring panelists Rodrigo Botafogo, Roberto Cortez, Dr. Chris Seaton, Oleg Selajev.
  • Serverless and the Fn Project. A discussion of where Serverless fits in the IT landscape. Panelists TBD.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please post a comment. We'll get back to you right away.

 

19th February 2019 |

Setting up Oracle Cloud Infrastructure Compute and Storage for Builds on Oracle Developer Cloud

With the 19.1.3 release of Oracle Developer Cloud, we have started supporting OCI-based build slaves for continuous integration and continuous deployment. You are now able to use OCI Compute and Storage for the build VMs and for artifact storage, respectively. This blog will help you understand how to configure the OCI account for Compute and Storage in Oracle Developer Cloud.

How do you get to the OCI Account configuration screen in Developer Cloud?

If your user has Organization Administrator privileges, then you will by default land on the Organization tab after you successfully log in to your Developer Cloud instance. In the Organization screen, click on the OCI Account tab.

Note: You will not be able to access this tab if you do not have Organization Administrator privileges.

 

Existing users of Developer Cloud will see their OCI Classic account configuration, and will notice that, unlike in the previous version, both the Compute and Storage configurations have now been consolidated into a single screen. Click on the Edit button to configure the OCI account.

Click on the OCI radio button to get the form for configuring an OCI account. This wizard will help you configure both compute and storage on OCI for use with Developer Cloud.

 

 

Before we look at what each of the fields in the wizard means, and where to retrieve its value from in the OCI console, let us understand what the message displayed at the top of the Configure OCI Account wizard (shown in the screenshot below) means:

 

It means that if you change from OCI Classic to an OCI account, the build VMs that were created using Compute on OCI Classic will be migrated to OCI-based build VMs. It also gives the count of the existing build VMs, created using OCI Classic compute, that will be migrated. This change will also result in the automatic migration of the build and Maven artifacts from Storage Classic to OCI Storage.

Prerequisites for the OCI account configuration:

You should have access to the OCI account, and you should also have a native OCI user with Admin privileges created in the OCI instance.

Note: You will not be able to use an IDCS user, or the user with which you log in to the Oracle Cloud MyServices console, unless that user also exists as a native OCI user.

By native user, we mean that you should be able to see the user (e.g., ociuser) in the Governance & Administration > Identity > Users tab of the OCI console, as shown in the screenshot below. If not, you will have to create a user by following this link.

OCI Account Configuration:

Below is the list of values you will need to configure the OCI account in Developer Cloud, with an explanation of what each one is and a screenshot of the OCI console showing where it can be found.

Tenancy OCID - This is the cloud tenancy identifier in OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Tenancy Information, click on the Copy link for the Tenancy OCID.

 

User OCID: The ID for the native OCI user. Go to Governance and Administration > Identity > Users in the OCI console. For the user of your choice, click on the Copy link for the User OCID.

 

Home Region: On the OCI console look at the right-hand top corner and you should find the region for your tenancy, as highlighted in the screenshot below.

 

Private Key: The user has to generate a public/private key pair in PEM format, and the public key has to be configured in the OCI console. Use this link to understand how you can create the key pair. Go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking on the username link, click on the Add Public Key button, and configure the public key there. The private key needs to be pasted into the Private Key field of the Configure OCI Account wizard in Developer Cloud.
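The key-pair generation itself follows the standard OpenSSL recipe from the OCI signing-key documentation. A small sketch that shells out to `openssl` (the file paths are illustrative; the docs suggest keeping keys under `~/.oci/`):

```python
import subprocess

PRIV = "oci_api_key.pem"          # illustrative path
PUB = "oci_api_key_public.pem"    # this is the key you upload via Add Public Key

# Generate a 2048-bit RSA private key in PEM format...
subprocess.run(["openssl", "genrsa", "-out", PRIV, "2048"], check=True)
# ...then derive the public key from it.
subprocess.run(["openssl", "rsa", "-pubout", "-in", PRIV, "-out", PUB], check=True)
# The fingerprint shown in the console next to the uploaded key can be
# computed locally as the colon-separated MD5 of the DER-encoded public key:
fp = subprocess.run(
    f"openssl rsa -pubout -outform DER -in {PRIV} | openssl md5 -c",
    shell=True, capture_output=True, text=True, check=True).stdout.strip()
print("fingerprint:", fp)
```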

 

Passphrase: If you gave a passphrase while generating the private key, then you will have to enter the same here; otherwise you can leave it empty.

Fingerprint: This is the fingerprint value of the OCI user whose OCID you copied earlier from the OCI console. Go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking on the username link, and, for the public key you created, copy the fingerprint value as shown in the screenshot below.

 

Compartment OCID: You can select the root compartment, for which the OCID is the same as the Tenancy OCID, but it is recommended that you create a separate compartment for the Developer Cloud build VMs for better management. You can create a new compartment by going to Governance and Administration > Identity > Compartments in the OCI console, clicking on the Create Compartment button, giving Compartment Name and Description values of your choice, and selecting the root compartment as the Parent Compartment.

Click on the link in the OCID column for the compartment that you created, and then click on the Copy link to copy the compartment OCID (DevCSBuild in this example).

 

Storage Namespace: This is the Object Storage namespace where the artifacts will be stored on OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Object Storage Settings, copy the Storage Namespace name as shown in the screenshot below.

 

After you have entered all the values, select the checkbox to accept the terms and conditions. Click the Validate button; if validation is successful, click the Save button to complete the OCI account configuration.

 

You will get a confirmation dialog for the account switch from OCI Classic to OCI. Select the checkbox and click the Confirm button. By doing this, you are giving your consent to migrate the VMs and the build and Maven artifacts to OCI compute and storage, respectively. This action will also remove the artifacts from Storage Classic.

On confirmation, you should see the OCI account configured with the provided details. You can edit it at any time by clicking the Edit button.

 

You can check for the Maven and build artifacts in the projects to confirm the migration.

 

To learn more about Oracle Developer Cloud, please refer to the documentation.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

 

9th February 2019 |

Oracle Cloud on a Roll: From One 'Next Big Things' Session to Another…

The Oracle Open World Showcase in London this January

We wrapped up an exciting Open World in London last month with a spotlight on all things Oracle Cloud. Hands-on Labs and demos took center stage to showcase the hottest use cases in apps and converged infrastructure (IaaS + PaaS). 

We ran a series of use cases, from autonomous databases and analytics, to platform solutions for SaaS - like a digital assistant (chatbot), app and data integration, and API gateways for any SaaS play across verticals - to cloud-native application development on OCI. Several customers joined us on stage during various keynote streams to share their experiences and demonstrate the richness of Oracle’s offering.

Macty’s (an Oracle Global Startup Ecosystem Partner) Move from AWS to the Oracle Cloud

Macty is one such customer, having transitioned from AWS to Oracle Cloud to build their fashion e-commerce platform with a focus on AI/ML to power visual search. Navigating AWS was hard for Macty. Expensive support, complex pricing choices, lack of automated backups for select devices, and delays in reaching support staff were some of the reasons why Macty moved to Oracle Cloud Infrastructure.

Macty used Oracle’s bare metal GPUs to train deep learning models. They used compartments to isolate customers and bill them correctly, and the DevCS platform (Terraform and Ansible) to update and check the environment from a deployment and configuration perspective.

Macty’s CEO, Susana Zoghbi, presented the Macty success story with Ashish Mohindroo, VP of Oracle Cloud. She demonstrated the power of the Macty chatbot (through Facebook Messenger), built on Oracle’s platform to enable e-commerce vendors to engage better with their customers.

The other solutions that Macty offers with their AI/API-powered platform include a recommendation engine to complete the look in real time, finding similar items, customizing the fashion look, and customer analytics to connect e-commerce with the in-store experience. Any of these features can be used by e-commerce stores to delight their customers and up their game against big retailers.

And now, Oracle Open World is Going to Dubai!

Ashish Mohindroo, VP of Oracle Cloud, will be keynoting the Next Big Things session again, this time at Oracle Open World in Dubai next week. He will be accompanied by Asser Smidt, founder of BotSupply (an Oracle Global Startup Ecosystem partner). BotSupply assists companies with conversational bots, has award-winning multi-lingual NLP, and is also a leader in conversational design.

While Ashish and Asser are going to explore conversational AI and design via bots powered by Oracle Cloud, Ashish will also elaborate on how Oracle Blockchain and Oracle IoT are becoming building blocks for extending modern applications in his ‘Bringing Enterprises to Blockchain’ session. He will be accompanied by Ghassan Sarsak of ICS Financial Services and Thrasos Thrasyvoulu of the Oracle Cloud Platform App Dev team.

Last, but never least, Ashish will explain how companies can build compelling user interfaces with augmented and virtual reality (AR/VR) and show how content is at the core of this capability. Oracle Content Cloud makes it easy for customers to build these compelling experiences on any channel: mobile, web, and other devices. If you're in Dubai next week, swing by Open World to catch the action.