Oracle Blogs | Oracle Developers Blog

 

16th January 2019 |

Podcast: Database Golden Rules: When (and Why) to Break Them

American inventor Thomas Edison once said, “Hell, there are no rules here. We're trying to accomplish something.”

What we hope to accomplish with this episode of the Groundbreaker Podcast is an exploration of the idea that the evolution in today’s architectures makes it advantageous, perhaps even necessary, to challenge some long-established concepts that have achieved “golden rule” status as they apply to the use of databases.

Does ACID (Atomicity, Consistency, Isolation, and Durability) still carry as much weight? In today’s environments, how much do rigorous data integrity enforcement, data normalization, and data freshness matter? This program explores those and other questions.

Bringing their insight and expertise to this discussion are three recognized IT professionals who regularly wrestle with balancing the rules with innovation. If you’ve struggled with that same balancing act, you’ll want to listen to this program.

The Panelists (listed alphabetically)

Heli Helskyaho
CEO, Miracle Finland Oy, Finland
Oracle ACE Director

Lucas Jellema
CTO, Consulting IT Architect, AMIS Services, Netherlands
Oracle ACE Director
Oracle Groundbreaker Ambassador

Guido Schmutz
Principal Consultant, Technology Manager, Trivadis, Switzerland
Oracle ACE Director
Oracle Groundbreaker Ambassador

 

Additional Resources

Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
  • JavaScript Development and Oracle JET. A discussion of the evolution of JavaScript development and the latest features in Oracle JET.

Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please let me know by posting a comment. I'll get back to you right away.

 

10th January 2019 |

Automated Generation For OCI IAM Policies

As a cloud developer evangelist here at Oracle, I often find myself playing around with one or more of our services or offerings on the Oracle Cloud.  This of course means I end up working quite a bit in the Identity and Access Management (IAM) section of the OCI Compute Console.  It's a pretty straightforward concept, and likely familiar if you've worked with any other cloud provider.  I won't give a full overview here about IAM as it's been covered plenty already and the documentation is concise and easy to understand.  But one task that always ends up taking me a bit longer to accomplish than I'd like it to is IAM policy generation.  The policy syntax in OCI is as follows:

Allow <subject> to <verb> <resource-type> in <location> where <conditions>

Which seems pretty easy to follow - and it is. The issue I often have, though, is remembering the values to plug in for the variable sections of the policy. Recalling the exact group name, the available verbs and resource types, and the exact compartment name that I want the policy to apply to is troublesome, and usually ends with me opening two or three tabs to look up exact spellings and case, then flipping over to the docs to get the verb and resource type just right. So I decided to do something to make my life a little easier when it comes to policy generation, and figured I'd share it with others in case I'm not the only one who struggles with this.
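To make the syntax concrete, here's what a finished policy might look like (the group and compartment names here are just hypothetical examples):

Allow group Developers to manage buckets in compartment ProjectA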

So, born out of my frustration and laziness, I present a simple project to help you generate IAM policies for OCI.  The tool is intended to be run from the command line and prompts you to make selections for each variable.  It gives you choices of available options based on actual values from your OCI account.  For example, if you choose to create a policy targeting a specific group, the tool gives you a list of your groups to choose from.  Same with verbs and resource types - the tool has a list of them built in and lets you choose which ones you are targeting instead of referring to the IAM policy documentation each time.  Here's a video demo of the tool in action:

The code itself isn't a masterpiece - there are hardcoded values for verbs and resource types because those aren't exposed via the OCI CLI or SDK in any way. But it works, and it makes policy generation a bit less painful. The code behind the tool is located on GitHub, so feel free to submit a pull request to keep the tool up to date or enhance it in any way. It's written in Groovy and can be run as a Groovy script, or via java -jar. If you'd rather just get your hands on the binary and try it out, grab the latest release and give it a shot.

The tool uses the OCI CLI behind the scenes to query the OCI API as necessary.  You'll need to make sure the OCI CLI is installed and configured on your machine before you generate a policy.  I decided to use the CLI as opposed to the SDK in order to minimize external dependencies and keep the project as light as possible while still providing value.  Besides, the OCI CLI is pretty awesome and if you work with the Oracle Cloud you should definitely have it installed and be familiar with it.

Please check out the tool and as always, feel free to comment below if you have any questions or feedback.

 

7th January 2019 |

New Features in Oracle Visual Builder for the New Year

 

3rd January 2019 |

Controlling Your Cloud - A Look At The Oracle Cloud Infrastructure Java SDK

A few weeks ago our cloud evangelism team got the opportunity to spend some time on site with some amazing developers from one of Oracle's clients in Santa Clara, CA, for a three-day cloud hackfest. During the event, one of the developers mentioned that a challenge his team faced was handling uploads of potentially extremely large files. I've faced this problem before as a developer and it's certainly challenging. The web just wasn't really built for large file transfers (though things have gotten much better in the past few years, as we'll discuss later on). We didn't get the opportunity to fully address the issue during the hackfest, but I promised the developer that I would follow up with a solution after digging deeper into the Oracle Cloud Infrastructure APIs once I got back home. So yesterday I got down to digging into the process and engineered a pretty solid demo for that developer on how to achieve large file uploads to OCI Object Storage. But before I show that solution, I wanted to give a basic introduction to working with your Oracle Cloud via the available SDK so that things are easier to follow once we get into some more advanced interactions.

Oracle offers several other SDKs (Python, Ruby and Go), but since I typically write my code in Groovy I went with the Java SDK.  Oracle provides a full REST API for working with your cloud, but the SDK provides a nice native solution and abstracts away some of the painful bits of signing your request and making the HTTP calls into a nice package that can be bundled within your application. The Java SDK supports the following OCI services:

  • Audit
  • Container Engine for Kubernetes
  • Core Services (Networking, Compute, Block Volume)
  • Database
  • DNS
  • Email Delivery
  • File Storage
  • IAM
  • Load Balancing
  • Object Storage
  • Search
  • Key Management

Let's take a look at the Java SDK in action, specifically how it can be used to interact with the Object Storage service. The SDK is open source and available on GitHub. I created a very simple web app for this demo. Unfortunately, the SDK is not yet available via Maven (see here), so step one was to download the SDK and include it as a dependency in my application. I use Gradle, so I dropped the JARs into a "libs" directory in the root of my app and declared the following dependencies block to make sure that Gradle picked up the local JARs (the key being the "implementation" method):
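A minimal sketch of that block, assuming the SDK JARs sit in a libs/ directory at the project root, looks something like this:

// build.gradle - pick up the OCI SDK JARs from the local libs directory
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
}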

The next step is to create some system properties that we'll need for authentication and some of our service calls. To do this, you'll need to set up some config files locally and generate some key pairs, which can be mildly annoying at first, but once you're set up you're good to go in the future, and you get the added bonus of being set up for the OCI CLI if you want to use it later on. Once I had the config file and keys generated, I set my props into a file in the app root called 'gradle.properties'. Using this properties file and the key naming convention shown below, Gradle makes the variables available within your build script as system properties.

Note that having the variables as system properties in your build script does not make them available within your application, but to do that you can simply pass them in via your 'run' task:
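A sketch of what that forwarding might look like (the property names here are illustrative assumptions, not the post's originals):

// build.gradle - forward selected Gradle properties to the app as JVM system properties
run {
    systemProperty 'ociConfigPath', findProperty('ociConfigPath')
    systemProperty 'ociProfile', findProperty('ociProfile') ?: 'DEFAULT'
}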

Next, I created a class to manage the provider and service clients.  This class only has a single client right now, but adding additional clients for other services in the future would be trivial.
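The original listing was an image, so here is a minimal sketch of such a manager class (the class name comes from the post; the system property names and region are assumptions):

import com.oracle.bmc.Region;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.objectstorage.ObjectStorageClient;

public class OciClientManager {

    private final ConfigFileAuthenticationDetailsProvider provider;

    public OciClientManager() throws java.io.IOException {
        // Reads the same config file that the OCI CLI uses
        this.provider = new ConfigFileAuthenticationDetailsProvider(
                System.getProperty("ociConfigPath", "~/.oci/config"),
                System.getProperty("ociProfile", "DEFAULT"));
    }

    public ObjectStorageClient getObjectStorageClient() {
        ObjectStorageClient client = new ObjectStorageClient(provider);
        client.setRegion(Region.US_ASHBURN_1); // assumption: Ashburn region
        return client;
    }
}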

I then created an 'ObjectService' for working with the Object Storage API.  The constructor accepts an instance of the OciClientManager that we looked at above, and sets some class variables for some things that are common to many of the SDK methods (namespace, bucket name, compartment ID, etc):
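A hedged sketch of that service class (reading the bucket name and compartment ID from system properties is purely illustrative):

import com.oracle.bmc.objectstorage.ObjectStorageClient;
import com.oracle.bmc.objectstorage.requests.GetNamespaceRequest;

public class ObjectService {

    private final ObjectStorageClient client;
    private final String namespace;
    private final String bucketName;
    private final String compartmentId;

    public ObjectService(OciClientManager manager) {
        this.client = manager.getObjectStorageClient();
        // The namespace is looked up once and reused by most requests
        this.namespace = client.getNamespace(GetNamespaceRequest.builder().build()).getValue();
        this.bucketName = System.getProperty("ociBucketName");
        this.compartmentId = System.getProperty("ociCompartmentId");
    }
}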

At this point, we're ready to interact with the SDK.  As a developer, it definitely feels like an intuitive API and follows a standard "request/response" model that other cloud providers use in their APIs as well.  I found myself often simply guessing what the next method or property might be called and often being right (or close enough for intellisense to guide me to the right place).  That's pretty much my benchmark for a great API - if it's intuitive and doesn't get in my way with bloated authentication schemes and such then I'm going to love working with it.  Don't get me wrong, strong authentication and security are assuredly important, but the purpose of an SDK is to hide the complexity and expose a method to use the API in a straightforward manner.  All that said, let's look at using the Object Storage client.  

We'll go rapid fire here and show how to use the client to do the following actions (with a sample result shown after each code block):

  1. List Buckets
  2. Get A Bucket
  3. List Objects In A Bucket
  4. Get An Object

List Buckets:
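The original listings and results were screenshots, so what follows for each action is a minimal sketch against the SDK's request/response classes, assuming the client, namespace, bucketName, and compartmentId fields from the ObjectService above:

// List all buckets in the compartment
ListBucketsResponse listBucketsResponse = client.listBuckets(
        ListBucketsRequest.builder()
                .namespaceName(namespace)
                .compartmentId(compartmentId)
                .build());
listBucketsResponse.getItems().forEach(bucket -> System.out.println(bucket.getName()));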

 

Get Bucket:
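A sketch of the matching call:

// Fetch a single bucket's details by name
GetBucketResponse getBucketResponse = client.getBucket(
        GetBucketRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .build());
System.out.println(getBucketResponse.getBucket().getName());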

List Objects:
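Again as a sketch:

// List the objects stored in the bucket
ListObjectsResponse listObjectsResponse = client.listObjects(
        ListObjectsRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .build());
listObjectsResponse.getListObjects().getObjects()
        .forEach(obj -> System.out.println(obj.getName()));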

Get Object:
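And a final sketch (the object name here is hypothetical):

// Fetch a single object; the response exposes an InputStream with its contents
GetObjectResponse getObjectResponse = client.getObject(
        GetObjectRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .objectName("foo.jpg")
                .build());
try (java.io.InputStream stream = getObjectResponse.getInputStream()) {
    java.nio.file.Files.copy(stream, java.nio.file.Paths.get("foo.jpg"),
            java.nio.file.StandardCopyOption.REPLACE_EXISTING);
}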

The 'Get Object' response also exposes an InputStream containing the object, which can be written to a file.

As you can see, the Object Storage API is predictable and consistent.  In another post, we'll finally tackle the more complex issue of handling large file uploads via the SDK.

 

2nd January 2019 |

Controlling Your Cloud - Uploading Large Files To Oracle Object Storage

In my last post, we took an introductory look at working with the Oracle Cloud Infrastructure (OCI) API with the OCI Java SDK.  I mentioned that my initial motivation for digging into the SDK was to handle large file uploads to OCI Object Storage, and in this post, we'll do just that.  

As I mentioned, HTTP (the Hypertext Transfer Protocol) wasn't originally meant to handle large file transfers. Rather, file transfers were typically (and often still are) handled via FTP (File Transfer Protocol). But web developers deal with globally distributed clients, and FTP requires server setup, custom desktop clients, different firewall rules, and separate authentication, which ultimately means large files end up getting transferred over HTTP/S. BitTorrent can be a better solution if the circumstances allow, but distributed file sharing isn't usually the scenario web developers are dealing with. Thankfully, advances in HTTP over the past several years have made large file transfer much easier to deal with, the main one for our purposes being multipart (or "chunked") upload. You can read more about Oracle's support for multipart uploading, but to explain it in the simplest possible way: a file is broken up into several pieces ("chunks"), the pieces are uploaded (at the same time, if necessary), and the file is reassembled from those pieces once all of them have been uploaded.

The process of using the Java SDK for multipart uploading involves, at a minimum, three steps. Here are the JavaDocs for the SDK, in case you're playing along at home and want more info.

  1. Initiate the multipart upload
  2. Upload the individual file parts
  3. Commit the upload

The SDK provides methods for all of the steps above, as well as a few additional operations such as listing existing multipart uploads. Individual parts can be up to 50 GiB. Using the ObjectClient (see the previous post), the three steps above break down as follows:

1.  Call ObjectClient.createMultipartUpload, passing an instance of a CreateMultipartUploadRequest (which contains an instance of CreateMultipartUploadRequestDetails)

To break down step 1, you're just telling the API: "Hey, I want to upload a file. The object name is 'foo.jpg' and its content type is 'image/jpeg'. Can you give me an identifier so I can associate different pieces of that file later on?" The API returns that identifier in the form of a CreateMultipartUploadResponse. Here's the code:
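The original listing was a screenshot; a minimal sketch of the call looks like this (object name and content type as in the example above):

// Step 1: initiate the multipart upload and capture the uploadId
CreateMultipartUploadResponse createResponse = client.createMultipartUpload(
        CreateMultipartUploadRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .createMultipartUploadDetails(
                        CreateMultipartUploadDetails.builder()
                                .object("foo.jpg")
                                .contentType("image/jpeg")
                                .build())
                .build());
String uploadId = createResponse.getMultipartUpload().getUploadId();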

So to create the upload, I make a call to /oci/upload-create, passing the objectName and contentType params. I'm invoking it via Postman, but this could just as easily be a fetch() call in the browser:

So now we've got an upload identifier for further work (see "uploadId", #2 in the image above).  On to step 2 of the process:

2.  Call ObjectClient.uploadPart(), passing an instance of UploadPartRequest (including the uploadId, the objectName, a sequential part number, and the file chunk), and receive an UploadPartResponse. The response will contain an "ETag" which we'll need to save, along with the part number, to complete the upload later on.

Here's what the code looks like for step 2:
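Again, a hedged sketch of the call (the part number and chunk file are placeholders; run this once per chunk, handling the checked I/O exceptions in the caller):

// Step 2: upload one chunk, with an incrementing part number per chunk
int partNumber = 1; // 1, 2, 3, ...
java.io.File chunkFile = new java.io.File("/tmp/chunks/part1"); // hypothetical chunk from "split"

UploadPartResponse uploadPartResponse = client.uploadPart(
        UploadPartRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .objectName("foo.jpg")
                .uploadId(uploadId)
                .uploadPartNum(partNumber)
                .uploadPartBody(new java.io.FileInputStream(chunkFile))
                .contentLength(chunkFile.length())
                .build());
String eTag = uploadPartResponse.getETag(); // save with the part number for the commit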

And here's an invocation of step 2 in Postman, which was completed once for each part of the file that I chose to upload.  I'll save the ETag values along with each part number for use in the completion step.

Finally, step 3 is to complete the upload.

3.  Call ObjectClient.commitMultipartUpload(), passing an instance of CommitMultipartUploadRequest (which contains the object name, uploadId and an instance of CommitMultipartUploadDetails - which itself contains an array of CommitMultipartUploadPartDetails).

Sounds a bit complicated, but it's really not.  The code tells the story here:
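Here's a sketch of that final call, assuming the ETags from step 2 were saved as eTag1 and eTag2:

// Step 3: commit the upload, pairing each saved part number with its ETag
List<CommitMultipartUploadPartDetails> parts = Arrays.asList(
        CommitMultipartUploadPartDetails.builder().partNum(1).etag(eTag1).build(),
        CommitMultipartUploadPartDetails.builder().partNum(2).etag(eTag2).build());

client.commitMultipartUpload(
        CommitMultipartUploadRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .objectName("foo.jpg")
                .uploadId(uploadId)
                .commitMultipartUploadDetails(
                        CommitMultipartUploadDetails.builder()
                                .partsToCommit(parts)
                                .build())
                .build());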

When invoked, we get a simple result confirming the completion of the multipart upload commit!  If we head over to our bucket in Object Storage, we can see the file details for the uploaded and reassembled file:

And if we visit the URL via a presigned URL (or directly, if the bucket is public), we can see the image.  In this case, a picture of my dog Moses:

As I've hopefully illustrated, the Oracle SDK's multipart upload is pretty straightforward to use once it's broken down into the required steps. There are a number of frontend libraries to assist you with multipart upload once you have the proper backend service in place (in my case, the file was simply broken up using the "split" command on my MacBook).

 

19th December 2018 |

Podcast: REST or GraphQL? An Objective Comparison

Are you a RESTafarian? Or are you a GraphQL aficionado? Either way you'll want to listen to the latest Oracle Groundbreaker Podcast, as a panel of experts weighs the pros and cons of each technology.

Representational State Transfer, known to its friends as REST, has been around for nearly two decades and has a substantial following. GraphQL, on the other hand, became publicly available in 2015, and only a few weeks ago moved under the control of the GraphQL Foundation, a project of the Linux Foundation. But despite its relative newcomer status, GraphQL has gained a substantial following of its own.

So which technology is best suited for your projects? That's your call. But this discussion will help you make that decision, as the panel explores essential questions, including: 

  • What circumstances or conditions favor one over the other?
  • How do the two technologies complement each other?
  • How difficult is it for long-time REST users to make the switch to GraphQL?

This program is Oracle Groundbreaker Podcast #361. It was recorded on Wednesday, December 12, 2018. Listen!

 

The Panelists

Luis Weir
CTO | Oracle Practice, Capgemini
Oracle Groundbreaker Ambassador; Oracle ACE Director

Chris Kincanon
Engineering Manager / Technical Product Owner, Spreemo

Dolf Dijkstra
Consulting Solutions Architect | A-Team - Cloud Solutions Architect, Oracle

James Neate
Oracle PaaS Consultant, Capgemini

Additional Resources

Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • Database: Breaking the Golden Rules: There comes a time to question, and even break, long-established rules. This program presents a discussion of the database rules that may no longer be advantageous.
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

 

17th December 2018 |

Announcing Oracle Cloud Infrastructure Resource Manager

We are excited to announce a new service, Oracle Cloud Infrastructure Resource Manager, that makes it easy to manage your infrastructure resources on Oracle Cloud Infrastructure. Resource Manager enables you to use infrastructure as code (IaC) to automate provisioning for infrastructure resources such as compute, networking, storage, and load balancing.

Using IaC is a DevOps practice that makes it possible to provision infrastructure quickly, reliably, and at any scale. Changes are made in code, not in the target systems. That code can be maintained in a source control system, so it’s easy to collaborate, track changes, and document and reverse deployments when required.

HashiCorp Terraform

To describe infrastructure, Resource Manager uses HashiCorp Terraform, an open source project that has become the dominant standard for describing cloud infrastructure. Oracle is making a strong commitment to Terraform and will enable all its cloud infrastructure services to be managed through Terraform. Earlier this year we released the Terraform Provider, and we have started to submit Terraform modules for Oracle Cloud Infrastructure to the Terraform Module Registry. Now we are taking the next step by providing a managed service.

Managed Service

In addition to the provider and modules, Oracle now provides Resource Manager, a fully managed service to operate Terraform. Resource Manager integrates with Oracle Cloud Infrastructure Identity and Access Management (IAM), so you can define granular permissions for Terraform operations. It further provides state locking, gives users the ability to share state, and lets teams collaborate effectively on their Terraform deployments. Most of all, it makes operating Terraform easier and more reliable.

With Resource Manager, you create a stack before you run Terraform actions. Stacks enable you to segregate your Terraform configuration, where a single stack represents a set of Oracle Cloud Infrastructure resources that you want to create together. Each stack individually maps to a Terraform state file that you can download.

To create a stack, you define a compartment and upload the Terraform configuration as a zip file. This zip file contains all the .tf files that define the resources that you want to create. You can optionally include a variables.tf file or define your variables in (key, value) format in the console.
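As a tiny illustration, a variables.tf inside that zip might look like this (the variable names are hypothetical):

variable "compartment_ocid" {}

variable "region" {
  default = "us-ashburn-1"
}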

After your stack is created, you can run different Terraform actions like plan, apply, and destroy on the stack. These Terraform actions are called jobs. You can also update the stack by uploading a new zip file, download its configuration, and delete the stack when required.

Plan: Resource Manager parses your configuration and returns an execution plan that lists the Oracle Cloud Infrastructure resources describing the end state.

Apply: Resource Manager creates your stack based on the results of the plan job. After this action is completed, you can see the resources that have been created successfully in the defined compartments.

Destroy: Terraform attempts to delete all the resources in the stack.

You can define permissions on your stacks and jobs through IAM policies. You can define granular permissions and let only certain users or groups perform actions like plan, apply, or destroy.

Availability

Resource Manager will become generally available in early 2019. We are currently providing access to selected customers through our Cloud Native Limited Availability Program. The currently available early version offers access to the Compute, Networking, Block Storage, Object Storage, IAM, and Load Balancing services. To learn more about Resource Manager or to request access to the technology, please register.

 

17th December 2018 |

Building the Oracle Code One 2018 Escape Rooms

By Chris Bensen, Cloud Experience Developer at Oracle

I’ve built a lot of crazy things in my life, but the “Escape Rooms” for Code One 2018 might just be one of the craziest. And funnest! The initial idea for our escape room came from Toni Epple, who built a Java-based escape room for a German conference. We thought it was rather good, and escape rooms are trendy and fun, so we decided to dial it up to eleven for 2018 Code One attendees. The concept was to have two escape rooms, one with a Java developer theme and one with the superhero theme of the developer keynote, and that’s when Duke’s Lab and Superhero Escape were born.

We wanted to build a demo that was different from what is normally at a conference and make the rooms feel like real rooms. I actually built two rooms with 2x4 construction in my driveway. Each room consisted of two eight-foot cubes that could be split in two pieces for easy shipping. And shipping wasn’t easy, as we only had 1/4” of clearance! Inside, the walls were faux brick, for the Brooklyn, New York look and feel where many of the Marvel comics take place. The faux brick is a particle board product that can be purchased at your favorite local hardware store, and it’s fire retardant, so it’s a turnkey solution.


 

Many escape rooms contain physical puzzles, and with Code One being a conference about programming languages, it seemed fitting to infuse electronics and software into each puzzle. Each room was powered by a 24 volt, 12 amp power supply, the same supply used to power an Ultimaker 3D printer. Using voltage regulators, this was stepped down to 12 volts, and in some cases 5 and 3.3 volts, depending on the needs. Throughout the room, conduit was run with custom 3D printed outlets to power each device, using aviation connectors because they are super secure.

The project took just over two months to build; over 100 unique 3D printed parts were created, and four 3D printers ran nearly 24/7 to produce over 400 parts in total. Eight Arduinos and five Raspberry Pis ran the rooms, with various electronics for sensors, displays, sound, and movement. The custom software was written using Python, Bash, C/C++ and Java.

At the heart of Duke’s Lab, and its final puzzle, is a wooden crate with two locks. The intention was to look like something out of a Bond film or Indiana Jones. Once you open it, you are presented with two devices, as seen in the photo below. I wouldn’t want to ruin the surprise, but let’s just say most people that open the crate get a little heart thump as the countdown timer starts ticking the moment the crate is opened!

At the heart of Superhero Escape we have The Mighty Thor’s hammer Mjölnir, Captain America’s shield and Iron Man’s arc reactor. The idea was to bring these three props to life and integrate them into an escape room of super proportions. And given the number of people that solved the puzzle and exited the room with Cap’s shield on one arm and Mjölnir in the other, I would say it was a resounding success!

The goal and final puzzle for Superhero Escape is to wield Mjölnir. Mjölnir was held to the floor of the escape room by a very powerful electromagnet. At the heart of the hammer is a piece of solid 1” thick steel I had custom machined to my specifications connected to a pipe.

The shell is one solid 3D print taking over 24 hours and an entire 1 kilogram of filament. For those that don’t know, that is an entire roll. Exactly an entire roll!

As with any project, I learned a lot. I leveraged all my knowledge of digital fabrication, traditional fabrication, electronics, programming, woodworking and puzzles, and did things I wasn’t sure were possible, especially in the timeframe we had. That’s what being an Oracle Groundbreaker is all about. And for all those Groundbreakers out there, keep dreaming and learning, because you never know when you’ll be asked to build something amazing that takes every bit of knowledge you have.

 

11th December 2018 |

Announcing Oracle Functions

Photo by Tim Easley on Unsplash

[First posted on the Oracle Cloud Infrastructure Blog]

At KubeCon 2018 in Seattle Oracle announced Oracle Functions, a new cloud service that enables enterprises to build and run serverless applications in the cloud. 

Oracle Functions is a serverless platform that makes it easy for developers to write and deploy code without having to worry about provisioning or managing compute and network infrastructure. Oracle Functions manages all the underlying infrastructure automatically and scales it elastically to service incoming requests.  Developers can focus on writing code that delivers business value.

Pay-per-use

Serverless functions change the economic model of cloud computing as customers are only charged for the resources used while a function is running.  There’s no charge for idle time! This is unlike the traditional approach of deploying code to a user provisioned and managed virtual machine or container that is typically running 24x7 and which must be paid for even when it’s idle.  Pay-per-use makes Oracle Functions an ideal platform for intermittent workloads or workloads with spiky usage patterns. 

Open Source

Open source has changed the way businesses build software and the same is true for Oracle. Rather than building yet another proprietary cloud functions platform, Oracle chose to invest in the Apache 2.0 licensed open source Fn Project and build Oracle Functions on Fn. With this approach, code written for Oracle Functions will run on any Fn server.  Functions can be deployed to Oracle Functions or to a customer managed Fn cluster on-prem or even on another cloud platform.  That said, the advantage of Oracle Functions is that it’s a serverless offering which eliminates the need for customers to manually manage an Fn cluster or the underlying compute infrastructure. But thanks to open source Fn, customers will always have the choice to deploy their functions to whatever platform offers the best price and performance. We’re confident that platform will be Oracle Functions.
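To make that portability concrete, here's a minimal sketch using the open source Fn CLI (the app and function names are arbitrary); the same function could be deployed to Oracle Functions or any Fn server:

fn init --runtime java hello
cd hello
fn deploy --app demo --local   # deploy to a local Fn server
fn invoke demo hello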

Container Native

Unlike most other functions platforms, Oracle Functions is container native with functions packaged as Docker container images.  This approach supports a highly productive developer experience for new users while allowing power users to fully customize their function runtime environment, including installing any required native libraries.  The broad Docker ecosystem and the flexibility it offers lets developers focus on solving business problems and not on figuring out how to hack around restrictions frequently encountered on proprietary cloud function platforms. 

As functions are deployed as Docker containers, Oracle Functions is seamlessly integrated with the Docker Registry v2 compliant Oracle Cloud Infrastructure Registry (OCIR) which is used to store function container images.  Like Oracle Functions, OCIR is also both serverless and pay-per-use.  You simply build a function and push the container images to OCIR which charges just for the resources used.

Secure

Security is the top priority for Oracle Cloud services, and Oracle Functions is no different. All access to functions deployed on Oracle Functions is controlled through Oracle Identity and Access Management (IAM), which allows both function management and function invocation privileges to be assigned to specific users and user groups. And once deployed, functions themselves may only access resources on VCNs in their compartment that they have been explicitly granted access to. Secure access is also the default for function container images stored in OCIR. Oracle Functions works with OCIR private registries to ensure that only authorized users are able to access and deploy function containers. In each of these cases, Oracle Functions takes a “secure by default” approach while providing customers full control over their function assets.

Getting Started

Oracle Functions will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Oracle Functions or to request access, please let us know by registering with this form. You can also learn more about the underlying open source technology used in Oracle Functions at FnProject.io.

 

11th December 2018 |

Announcing Oracle Cloud Native Framework at KubeCon North America 2018

This blog was originally published at https://blogs.oracle.com/cloudnative/

At KubeCon + CloudNativeCon North America 2018, Oracle has announced the Oracle Cloud Native Framework - an inclusive, sustainable, and open cloud native development solution with deployment models for public cloud, on premises, and hybrid cloud. The Oracle Cloud Native Framework is composed of the recently-announced Oracle Linux Cloud Native Environment and a rich set of new Oracle Cloud Infrastructure cloud native services including Oracle Functions, an industry-first, open serverless solution available as a managed cloud service based on the open source Fn Project.

With this announcement, Oracle is the only major cloud provider to deliver and support a unified cloud native solution across managed cloud services and on-premises software, for public cloud (Oracle Cloud Infrastructure), hybrid cloud, and on-premises users, supporting seamless, bi-directional portability of cloud native applications built anywhere on the framework. Since the framework is based on open, CNCF certified, conformant standards, it will not lock you in - applications built on the Oracle Cloud Native Framework are portable to any Kubernetes conformant environment, on any cloud or infrastructure.

Oracle Cloud Native Framework – What is It?

The Oracle Cloud Native Framework provides a supported solution of Oracle Cloud Infrastructure cloud services and Oracle Linux on-premises software based on open, community-driven CNCF projects. These are built on an open, Kubernetes foundation – among the first K8s products released and certified last year. Six new Oracle Cloud Infrastructure cloud native services are being announced as part of this solution and build on the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services.

Cloud Native at a Crossroads – Amazing Progress

We should all pause and consider how far the cloud native ecosystem has come – evidenced by the scale, excitement, and buzz around the sold-out KubeCon conference this week and the success and strong foundation that Kubernetes has delivered! We are living in a golden age for developers – a literal "First Wave" of cloud native deployment and technology - being shaped by three forces coming together and creating massive potential:

  • Culture: The DevOps culture has fundamentally changed the way we develop and deploy software and how we work together in application development teams. With almost a decade’s worth of work and metrics to support the methodologies and cultural shifts, it has resulted in many related off-shoots, alternatives, and derivatives including SRE, DevSecOps, AIOps, GitOps, and NoOps (the list will go on no doubt).

  • Code: Open source and the projects that have been battle tested and spun out of webscale organizations like Netflix, Google, Uber, Facebook, and Twitter have been democratized under the umbrella of organizations like CNCF (Cloud Native Computing Foundation). This grants the same access and opportunities to citizen developers playing or learning at home, as it does to enterprise developers in the largest of orgs.

  • Cloud: Unprecedented compute, network, and storage are available in today’s cloud – and that power continues to grow with a never-ending explosion in scale, from bare metal to GPUs and beyond. This unlocks new applications for developers in areas such as HPC apps, Big Data, AI, blockchain, and more. 

Cloud Native at a Crossroads – Critical Challenges Ahead

Despite all the progress, we are facing new challenges to reach beyond these first wave successes. Many developers and teams are being left behind as the culture changes. Open source offers thousands of new choices and options, which on the surface create more complexity than a closed, proprietary path where everything is pre-decided for the developer. The rush towards a single source cloud model has left many with cloud lock-in issues, resulting in diminished choices and rising costs – the opposite of what open source and cloud are supposed to provide.

The challenges below mirror the positive forces above and are reflected in the August 2018 CNCF survey:

  • Cultural Change for Developers: on-premises, traditional development teams are being left behind. Cultural change is slow and hard.

  • Complexity: too many choices, too hard to do yourself (maintain, administer), too much too soon?

  • Cloud Lock-in: proprietary single-source clouds can lock you in with closed APIs, services, and non-portable solutions.

The Cloud Native Second Wave – Inclusive, Sustainable, Open

What’s needed is a different approach:

  • Inclusive: can include cloud and on-prem, modern and traditional, dev and ops, startups and enterprises

  • Sustainable: managed services versus DIY, open but curated, supported, enterprise grade infrastructure

  • Open: truly open, community-driven, and not based on proprietary tech or self-serving OSS extensions

Introducing the Oracle Cloud Native Framework – What’s New?

The Oracle Cloud Native Framework spans public cloud, on-premises, and hybrid cloud deployment models – offering choice and uniquely meeting the broad deployment needs of developers. It includes Oracle Cloud Infrastructure Cloud Native Services and the Oracle Linux Cloud Native Environment. On top of the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services, a rich set of new Oracle Cloud Infrastructure cloud native services has been announced with services across provisioning, application definition and development, and observability and analysis.

 

  • Application Definition and Development

    • Oracle Functions: A fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the open source Fn Project. Multi-tenant and container native, Oracle Functions lets developers focus on writing code to meet business needs without having to manage or even address the underlying infrastructure. Users only pay for execution, not for idle time.

    • Streaming: Enables applications such as supply chain, security, and IoT to collect from many sources and process in real-time. Streaming is a highly available, scalable and multi-tenant platform that makes it easy to collect and manage streaming data.

  • Provisioning

    • Resource Manager: A managed Oracle Cloud Infrastructure provisioning service based on industry-standard Terraform. Infrastructure as code is a fundamental DevOps pattern, and Resource Manager is an indispensable tool to automate configuration and increase productivity by managing infrastructure declaratively.

  • Observation and Analysis

    • Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Monitoring provides predefined metrics and dashboards, and also supports a service API to obtain a top-down view of the health, performance, and capacity of the system. The monitoring service includes alarms to track these metrics and act when they vary or exceed defined thresholds, helping users meet service level objectives and avoid interruptions.

    • Notification Service: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. Users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.

    • Events: Based on the CNCF CloudEvents standard, Events enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or they can trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

Use Cases for the Oracle Cloud Native Framework: Inclusive, Sustainable, Open

Inclusive: The Oracle Cloud Native Framework includes both cloud and on-prem, supports modern and traditional applications, supports both dev and ops, can be used by startups and enterprises. As an industry, we need to create more on-ramps to the cloud native freeway – in particular by reaching out to teams and technologies and connecting cloud native to what people know and work on every day. The WebLogic Server Operator for Kubernetes is a great example of just that. It enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management. 

As another example, the Helidon project for Java creates a microservice architecture and framework for Java apps to move more quickly to cloud native.

Many Oracle Database customers are connecting cloud native applications based on Kubernetes for new web front-ends and AI/big data processing back-ends, and the combination of the Oracle Autonomous Database and OKE creates a new model for self-driving, securing, and repairing cloud native applications. For example, using Kubernetes service broker and service catalog technology, developers can simply connect Autonomous Transaction Processing applications into OKE services on Oracle Cloud Infrastructure.

 

Sustainable: The Oracle Cloud Native Framework provides a set of managed cloud services and supported on-premises solutions, open and curated, and built on an enterprise grade infrastructure. New open source projects are popping up every day and the rate of change of existing projects like Kubernetes is extraordinary. While the landscape grows, the industry and vendors must face the resultant challenge of complexity as enterprises and teams can only learn, change, and adopt so fast.

A unified framework helps reduce this complexity through curation and support. Managed cloud services are the secret weapon to reduce the administration, training, and learning curve issues enterprises have had to shoulder themselves. While a do-it-yourself approach has been their only choice up to recently, managed cloud services such as OKE give developers a chance to leapfrog into cloud native without a long and arduous learning curve.

A sustainable model – built on an open, enterprise grade infrastructure, gives enterprises a secure, performant platform from which to build real hybrid cloud deployments including these five key hybrid cloud use cases:

  1. Development and DevOps: Dev/test in the cloud, production on-prem

 

 

  2. Application Portability and Migration: enables bi-directional cloud native application portability (on-prem to cloud, cloud to on-prem) and lift-and-shift migrations. The Oracle MySQL Operator for Kubernetes is an extremely popular solution that simplifies portability and integration of MySQL applications into cloud native tooling. It enables creation and management of production-ready MySQL clusters based on a simple declarative configuration format, including operational tasks such as database backups and restoring from an existing backup. The MySQL Operator simplifies running MySQL inside Kubernetes and enables further application portability and migrations.

 

 

  3. HA/DR: Disaster recovery or high availability sites in cloud, production on-prem

  4. Workload-Specific Distribution: Choose where you want to run workloads, on-prem or cloud, based on specific workload type (e.g., based on latency, regulation, new vs. legacy)

  5. Intelligent Orchestration: More advanced hybrid use cases require more sophisticated distributed application intelligence and federation – these include cloud bursting and Kubernetes federation

 

Open: Over the course of the last few years, development teams have typically chosen to embrace a single-source cloud model to move fast and reduce complexity – in other words, the quick and easy solution. The price they are paying now is cloud lock-in resulting from proprietary services, closed APIs, and non-portable solutions. This is the exact opposite of where we are headed as an industry – fueled by open source, CNCF-based, and community-driven technologies.

 

An open ecosystem enables not only a hybrid cloud world but a truly multi-cloud world – and that is the vision that drives the Oracle Cloud Native Framework!

 

5th December 2018 |

Podcast: Inspiring Innovation and Entrepreneurism in Young People

A common thread connecting the small army of IT professionals I've met over the last 20 years is that their interest in technology developed when they were very young, and that youthful interest grew into a full-fledged career. That's truly wonderful. But what happens if a young person never has a chance to develop that interest? And what can be done to draw those young people to careers in technology? In this Oracle Groundbreakers Podcast extra you will meet someone who is dedicated to solving that very problem.

Karla Readshaw is director of development for Iridescent, a non-profit organization focused on bringing quality STEM education (science, technology, engineering, and mathematics) to young people -- particularly girls -- around the globe.

"Our end goal is to ensure that every child, with a specific focus on underrepresented groups -- women and minorities -- has the opportunity to learn, and develop curiosity, creativity and perseverance, what real leaders are made of," Karla explains in her presentation.

Iridescent, through its Technovation program, provides middle- and high-school girls with the resources to develop solutions to real problems in their local communities, "leveraging technology and engineering for social good," as Karla explains.

Over a three-month period, the girls involved in the Technovation program identify a problem within their community, design and develop a mobile app to address the issue, and then build a business around that app, all under the guidance of an industry mentor.

The results are impressive. In one example, a team of hearing-impaired girls in Brazil developed an app that teaches American Sign Language, and then developed a business around it. In another, a group of high-school girls in Guadalajara, Mexico drew on personal experience to develop an app that strengthens the relationship between Alzheimer's patients and their caregivers. And a group of San Francisco Bay Area girls created a mobile app that helps those with autism to improve social skills and reduce anxiety.

Want to learn more about the Technovation program, and about how you can get involved? Just listen to this podcast. 

This program was recorded during Karla's presentation at the Women In Technology Breakfast held on October 22, 2018 as part of Oracle Code One.

Additional Resources Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • GraphQL and REST: An Objective Comparison: a panel of experts weighs the pros and cons of each of these approaches in working with APIs. 
  • Database: Breaking the Golden Rules: There comes a time to question, and even break, long-established rules. This program presents a discussion of the database rules that may no longer be advantageous.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

 

4th December 2018 |

Deploy containers on Oracle Container Engine for Kubernetes using Developer Cloud

In my previous blog, I described how to use Oracle Developer Cloud to build a Node.js microservice Docker image and push it to DockerHub. This blog will help you understand how to use Oracle Developer Cloud to deploy that Docker image to Container Engine for Kubernetes.

Container Engine for Kubernetes

Container Engine for Kubernetes is a developer-friendly, container-native, enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle Cloud Infrastructure. Visit the following link to learn about Oracle’s Container Engine for Kubernetes:

https://cloud.oracle.com/containers/kubernetes-engine

Prerequisites for Kubernetes Deployment

  1. Access to an Oracle Cloud Infrastructure (OCI) account
  2. A Kubernetes cluster set up on OCI
    This tutorial explains how to set up a Kubernetes cluster on OCI. 

Set Up the Environment:

Create and Configure Build VM Templates and Build VMs

You’ll need to create and configure the Build VM template and Build VM with the required software, which will be used to execute the build job.

 

Click the user avatar, then select Organization from the menu. 

 

Click VM Templates, then New Template. In the dialog that pops up, enter a template name, such as Kubernetes Template, select "Oracle Linux 7" as the platform, then click the Create button.

 

After the template has been created, click Configure Software.

 

Select Kubectl and OCIcli (you'll be asked to add Python 3.6 as well) from the list of software bundles available for configuration, then click + to add these software bundles to the template.

Click the Done button to complete the software configuration for that Build VM template.

           

From the Virtual Machines page, click +New VM and, in the dialog that pops up, enter the number of VMs you want to create and select the VM Template you just created (Kubernetes Template).

 

Click the Add button to add the VM.

 

Kubernetes deployment scripts

From the Project page, click the + New Repository button to add a new repository.

 

Once the repository is created, Developer Cloud brings you to the Code page, showing the NodejsKubernetes repository. Click the +File button to create a new file in the repository. (The README file in the repository was created when the project was created.)

 

Copy the following script into a text editor and save the file as nodejs_micro.yaml.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nodejsmicro-k8s-deployment
spec:
  selector:
    matchLabels:
      app: nodejsmicro
  replicas: 1 # deployment runs 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nodejsmicro
    spec:
      containers:
      - name: nodejsmicro
        image: abhinavshroff/nodejsmicro4kube:latest
        ports:
        - containerPort: 80 # endpoint is at port 80 in the container
---
apiVersion: v1
kind: Service
metadata:
  name: nodejsmicro-k8s-service
spec:
  type: NodePort # exposes the service as a node port
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nodejsmicro

 

Click the Commit button to create the file and commit the code changes.

 

Click the Commit button in the Commit changes dialog that displays.

You should see the nodejs_micro.yaml file in the list of files for the NodejsKubernetes.git repository, as shown in the screenshot below.

 

Configuring the Build Job

Click Build on the navigation bar to display the Build page. Click the +New Job button to create a new build job. In the New Job dialog box, enter NodejsKubernetesDeploymentBuild for the Job name and, from the Software Template drop-down list, select Kubernetes Template as the Software Template. Then click the Create Job button to create the build job.

 

After the build job has been created, you’ll be brought to the configure screen. Click the Source Control tab and select NodejsKubernetes.git from the repository drop-down list. This is the same repository where you created the nodejs_micro.yaml file. Select master from the Branch drop-down list.

 

In the Builders tab, click the Add Builder drop-down and select OCIcli Builder from the drop-down list. 

To see what you need to fill in for each of the input fields in the OCIcli Builder form, and to find out where to retrieve these values, you can either read my "Oracle Cloud Infrastructure CLI on Developer Cloud" blog or see the "Access Oracle Cloud Infrastructure Services Using OCIcli" section in Using Oracle Developer Cloud Service.

Note: The values in the screenshot below have been obfuscated for security reasons. 

 

Click the Add Builder drop-down list again and select Unix Shell Builder.

 

In the text area of the Unix Shell Builder, add the following script, which downloads the Kubernetes config file and deploys the container on the Oracle Kubernetes Engine cluster you created by following the instructions in my previous blog. Click the Save button to save the build job.

 

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.iad.aaaaaaaaafrgkzrwhtimldhaytgnjqhazdgmbuhc2gemrvmq2w --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl config view
kubectl get nodes
kubectl create -f nodejs_micro.yaml
sleep 120
kubectl get services nodejsmicro-k8s-service
kubectl get pods
kubectl describe pods

This script creates the kube directory, uses the OCIcli command oci ce cluster create-kubeconfig to download the Kubernetes cluster config file, and then sets the KUBECONFIG environment variable.

The kubectl config view and get nodes commands just let you view the cluster configuration and see the node details of the cluster. The create command actually deploys the Docker container on the Kubernetes cluster. We run the get services and get pods commands to retrieve the IP address and the port of the deployed container. Note that the nodejsmicro-k8s-service name was previously configured in the nodejs_micro.yaml file.

Note: The cluster OCID in the script above needs to be replaced with your own.

 

Click the Build Now button to start executing the Kubernetes deployment build. You can click the Build Log icon to view the build execution logs.

 

After the build job executes successfully, you can examine the build log to retrieve the IP address and the port for the service deployed on the Kubernetes cluster. You'll need to look for the IP address and the port under the deployment name you configured in the YAML file.

 

Use the IP address and the port that you retrieved in the format shown below and see the output in your browser.

http://<IP Address>:port/message

Note: The message output you see may differ from what is shown here, based on what you coded in the Node.js REST application that was containerized.

 

So, now you’ve seen how Oracle Developer Cloud streamlines and simplifies the automation of building and deploying Docker containers on Oracle Kubernetes Engine.

Happy Coding!

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

 

27th November 2018 |

Finding Symmetry

(Originally published on Medium)

Evolving the design of Eclipse Collections through symmetry.

Got Eclipse Collections stickers?

Find the Missing Types

New Eclipse Collections types on the left add to the existing JDK types on the right

Eclipse Collections has a bunch of new types you will not find in the JDK. These types give developers useful functionality that they need. There is an extra cost to supporting additional container types, especially when you factor in having support for primitive types across these types.

These missing types are important. They let Eclipse Collections provide better return types for iteration patterns.

Type Symmetry

Eclipse Collections has pretty good symmetry between object and primitive types.


The missing container types are fixed-size primitive arrays, primitive BiMaps, primitive Multimaps, and some of the primitive Intervals (only IntInterval exists today). String really should only exist as a primitive immutable collection of either char or int. Eclipse Collections has CharAdapter, CodePointAdapter and CodePointList, which provide a rich set of iteration protocols that work with Strings.

API Symmetry

There is still much that can be done to improve the symmetry between the object and primitive APIs. Some APIs cannot be replicated without adding new types. For instance, it would be less than desirable to implement a primitive version of groupBy with the current Multimap implementations, because the only option would be to box the primitive Lists, Sets or Bags. Since there are a large number of APIs in Eclipse Collections, I will only draw attention to some of the major APIs that do not currently have symmetry between object and primitive collections. The following methods are missing on the primitive iterables:

  1. groupBy / groupByEach
  2. countBy / countByEach
  3. aggregateBy / aggregateInPlaceBy
  4. partition
  5. reduce / reduceInPlace
  6. toMap
  7. All “With” methods

Of all the missing APIs on primitive collections, perhaps the most subtle and yet most glaring omission is the lack of “With” methods. It is not clear if the “With” methods would be as useful for primitive collections as they are for object collections. For some usage examples of the “With” methods on the object collection APIs, read my blog titled “Preposition Preference”. The “With” methods allow more APIs to be used with Method References.
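For a quick sketch of the pattern on the object side, where these methods already exist (the list contents here are arbitrary):

import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

MutableList<String> names = Lists.mutable.with("Alice", "Albert", "Bob");

// The extra parameter is passed alongside each element, so a method
// reference like String::startsWith can stand in for a lambda
MutableList<String> aNames = names.selectWith(String::startsWith, "Al"); // [Alice, Albert]
boolean anyBob = names.anySatisfyWith(String::equals, "Bob");            // true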

This is what the signatures for some of the “With” methods might look like on IntList.

<P> boolean anySatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> boolean allSatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> boolean noneSatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> IntList selectWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> IntList rejectWith(IntObjectPredicate<? super P> predicate, P parameter);

Default Methods to the Rescue

The addition of default methods in Java 8 has been a tremendous help in increasing the symmetry between our object and primitive APIs. In Eclipse Collections 10.x we will be able to leverage default methods even more, as we now have the ability to use container factory classes in interfaces. The following examples show how the default implementations of countBy and countByWith have been optimized using the Bags factory.

default <V> Bag<V> countBy(Function<? super T, ? extends V> function)
{
    return this.countBy(function, Bags.mutable.empty());
}

default <V, P> Bag<V> countByWith(Function2<? super T, ? super P, ? extends V> function, P parameter)
{
    return this.countByWith(function, parameter, Bags.mutable.empty());
}

More on Eclipse Collections API design

To find out more about the design of the Eclipse Collections API, check out this slide deck and the following presentation.


You can also find a set of visualizations of the Eclipse Collection library in this blog post.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.

 

26th November 2018 |

Install Spinnaker with Halyard on Kubernetes

(Originally published on Medium)

This article walks you through the steps to install and set up a Spinnaker instance on Kubernetes behind a corporate proxy. We will use Halyard on Docker to manage our Spinnaker deployment.

For a super quick installation, you can use Spinnaker’s Helm chart.

Prerequisites

Make sure to take care of these prerequisites before installing Spinnaker:

  • Docker 17.x with proxies configured (click here for OL setup)
  • A Kubernetes cluster (click here for OL setup)
  • Helm with RBAC enabled (click here for generic setup)
Install Halyard on Docker

Halyard is used to install and manage a Spinnaker deployment. In fact, all production-grade Spinnaker deployments require Halyard in order to properly configure and maintain Spinnaker. Let’s use Docker to install Halyard.

Create a docker volume or create a host directory to hold the persistent data used by Halyard. For the purposes of this article, let’s create a host directory and grant users full access:

mkdir halyard && chmod 747 halyard

Halyard needs to interact with your Kubernetes cluster, so we pass it the $KUBECONFIG file. One way is to mount into the container a host directory containing your Kubernetes cluster details. Let’s create the directory “k8s”, copy the $KUBECONFIG file into it, and make it visible to the user inside the Halyard container.

mkdir k8s && cp $KUBECONFIG k8s/config && chmod 755 k8s/config

Time to download and run the Halyard Docker image:

docker run -p 8084:8084 -p 9000:9000 \
  --name halyard -d \
  -v /sandbox/halyard:/home/spinnaker/.hal \
  -v /sandbox/k8s:/home/spinnaker/k8s \
  -e http_proxy=http://<proxy_host>:<proxy_port> \
  -e https_proxy=https://<proxy_host>:<proxy_port> \
  -e JAVA_OPTS="-Dhttps.proxyHost=<proxy_host> -Dhttps.proxyPort=<proxy_port>" \
  -e KUBECONFIG=/home/spinnaker/k8s/config \
  gcr.io/spinnaker-marketplace/halyard:stable

Make sure to replace the “<proxy_host>” and “<proxy_port>” with your corporate proxy values.

Log in to the “halyard” container to test the connection to your Kubernetes cluster:

docker exec -it halyard bash
kubectl get pods -n spinnaker

Optionally, if you want command completion run the following inside the halyard container:

source <(hal --print-bash-completion)

Set provider to “Kubernetes”

In Spinnaker terms, applications are deployed through integrations with specific cloud platforms. We have to configure Halyard and set the cloud provider to Kubernetes v2 (manifest based), since we want to deploy Spinnaker onto a Kubernetes cluster:

hal config provider kubernetes enable

Next we create an account. In Spinnaker, an account is a named credential Spinnaker uses to authenticate against an integration provider — Kubernetes in our case:

hal config provider kubernetes account add <my_account> \
  --provider-version v2 \
  --context $(kubectl config current-context)

Make sure to replace “<my_account>” with an account name of your choice. Save the account name in an environment variable $ACCOUNT. Next, we need to enable Halyard to use artifacts:

hal config features edit --artifacts true

Set deployment type to “distributed”

Halyard supports multiple types of Spinnaker deployments. Let’s tell Halyard that we need a distributed deployment of Spinnaker:

hal config deploy edit --type distributed --account-name $ACCOUNT

Set persistent store to “Minio”

Spinnaker needs a persistent store to save the continuous delivery pipelines and other configuration. Halyard lets you choose from multiple storage providers. For the purposes of this article, we will use “Minio”.

Let’s use Helm to install a simple instance of Minio. Run the command from outside the Halyard docker container on a node that has access to your Kubernetes cluster and Helm:

helm install --namespace spinnaker --name minio \
  --set accessKey=<access_key> \
  --set secretKey=<secret_key> \
  stable/minio

Make sure to replace “<access_key>” and “<secret_key>” with values of your choosing.

If you are using a local k8s cluster with no real persistent volume support, you can pass “persistence.enabled=false” as an additional --set flag to the previous Helm command, as shown below. As the flag suggests, if Minio goes down, you will lose your changes.
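For reference, the no-persistence variant of the install command might look like this (same placeholders as before):

helm install --namespace spinnaker --name minio \
  --set accessKey=<access_key> \
  --set secretKey=<secret_key> \
  --set persistence.enabled=false \
  stable/minio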

According to the Spinnaker docs, Minio does not support object versioning, so let’s disable versioning in the Halyard configuration. Back in the Halyard docker container, run these commands:

mkdir ~/.hal/default/profiles && \
touch ~/.hal/default/profiles/front50-local.yml

Add the following to the front50-local.yml file:

spinnaker.s3.versioning: false

Now run the following command to configure the storage provider:

echo $MINIO_SECRET_KEY | \
hal config storage s3 edit --endpoint http://minio:9000 \
  --access-key-id $MINIO_ACCESS_KEY \
  --secret-access-key

Make sure to set the $MINIO_ACCESS_KEY and $MINIO_SECRET_KEY environment variables to the <access_key> and <secret_key> values that you used when you installed Minio.

Finally, let’s enable the s3 storage provider:

hal config storage edit --type s3

Set version to “latest”

You have to select a specific version of Spinnaker and configure Halyard so it knows which version to deploy. You can view the available versions by running this command:

hal version list

Pick the latest version number from the list (or any other version that you want to deploy) and update Halyard:

hal config version edit --version <version>

Deploy Spinnaker

At this point, Halyard should have all the information that it needs to deploy a Spinnaker instance. Let’s go ahead and deploy Spinnaker by running this command:

hal deploy apply

Note that first-time deployments might take a while.

Make Spinnaker reachable

We need to expose the Spinnaker UI and gateway services in order to interact with the Spinnaker dashboard and start creating pipelines. When we deployed Spinnaker using Halyard, a number of Kubernetes services were created in the “spinnaker” namespace. By default, these services are exposed only within the cluster (type “ClusterIP”). Let’s change the service type of the services fronting the UI and API servers of Spinnaker to “NodePort” to make them available to end users outside the Kubernetes cluster.

Edit the “spin-deck” service by running the following command:

kubectl edit svc spin-deck -n spinnaker

Change the type to “NodePort” and optionally specify the port on which you want the service exposed. Here’s a snapshot of the service definition:

...
spec:
  type: NodePort
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
    nodePort: 30900
  selector:
    app: spin
    cluster: spin-deck
  sessionAffinity: None
status:
...

Next, edit the “spin-gate” service by running the following command:

kubectl edit svc spin-gate -n spinnaker

Change the type to “NodePort” and optionally specify the port on which you want the API gateway service exposed.

Note that Kubernetes services can be exposed in multiple ways. If you want to expose Spinnaker on the public internet, you can use a LoadBalancer or an Ingress with HTTPS turned on (a sketch follows below). You should configure authentication to lock out unauthorized users.
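As a rough sketch only (not part of the original setup), an Ingress fronting the UI service could look something like the manifest below. The host name and TLS secret are hypothetical placeholders, and you would still need an ingress controller and authentication configured; the API group shown matches Kubernetes versions of this era:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spinnaker-ingress        # hypothetical name
  namespace: spinnaker
spec:
  tls:
  - hosts:
    - spinnaker.example.com      # placeholder host
    secretName: spinnaker-tls    # assumed pre-created TLS secret
  rules:
  - host: spinnaker.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spin-deck
          servicePort: 9000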

Save the hostname or IP address of the node that will be used to access Spinnaker in an environment variable, $SPIN_HOST. Using Halyard, configure the UI and API servers to receive incoming requests:

hal config security ui edit \
  --override-base-url "http://$SPIN_HOST:30900"
hal config security api edit \
  --override-base-url "http://$SPIN_HOST:30808"

Redeploy Spinnaker so it picks up the configuration changes:

hal deploy apply

You can access the Spinnaker UI at “http://$SPIN_HOST:30900”.

Create a “hello-world” application

Let’s take Spinnaker for a spin (pun intended). Using Spinnaker’s UI, let’s create a “hello-world” application. Use the “Actions” drop-down and click “Create Application”:


Once the application is created, navigate to “Pipelines” tab and click “Configure a new pipeline”:


Now add a new stage to the pipeline to create a manifest based deployment:


Under the “Manifest Configuration”, add the following as the manifest source text:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: '<docker_repository>:5000/helloworld:v1'
        name: hello-world
        ports:
        - containerPort: 80

Replace the “<docker_repository>” with the name of your internal docker registry that is made available to your Kubernetes cluster.

Let’s take a quick side tour to create a “helloworld” Docker image. We will create an nginx-based image that hosts an “index.html” file containing:

<h1>Hello World</h1>

We will then create the corresponding “Dockerfile” in the same directory that holds the “index.html” file from the previous step:

FROM nginx:alpine
COPY . /usr/share/nginx/html

Next, we build the docker image by running the following command:

docker build -t <docker_repository>:5000/helloworld:v1 .

Again, make sure to replace “<docker_repository>” with the name of your internal Docker registry that is made available to your Kubernetes cluster. Then push the image to that registry to make it available to the cluster:

docker push <docker_repository>:5000/helloworld:v1

Back in the Spinnaker UI, let’s manually run the “hello-world” pipeline. After a successful execution you can drill down into the pipeline instance details:


To quickly test our hello-world app, we can create a manifest based “LoadBalancer” in the Spinnaker UI. Click the “+” icon:


Add the following service definition to create the load balancer:

kind: Service
apiVersion: v1
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31080

Once Spinnaker provisions the load balancer, hit the hello-world app’s URL at “http://$SPIN_HOST:31080” in your browser. Voila! There you have it, “Hello World” is rendered.

Conclusion

Spinnaker is a multi-cloud continuous delivery platform for releasing software with high velocity. We used Halyard to install Spinnaker on a Kubernetes cluster and deployed a simple hello-world pipeline. Of course, we barely scratched the surface in terms of what Spinnaker offers. Head over to the guides to learn more about Spinnaker.


 

22nd November 2018 |

How to Connect a Go Program to Oracle Database using goracle

Given that we just released Go programming language RPMs on Oracle Linux yum server, I figured it would be a good opportunity to take the goracle driver for a spin on Oracle Linux and connect a Go program to Oracle Database. goracle implements a Go database/sql driver for Oracle Database using ODPI-C (Oracle Database Programming Interface for C).

1. Update Yum Configuration

First, make sure you have the most recent Oracle Linux yum server repo file by grabbing it from the source:

$ sudo mv /etc/yum.repos.d/public-yum-ol7.repo /etc/yum.repos.d/public-yum-ol7.repo.bak
$ sudo wget -O /etc/yum.repos.d/public-yum-ol7.repo http://yum.oracle.com/public-yum-ol7.repo

2. Enable Required Repositories to install Go and Oracle Instant Client

$ sudo yum -y install yum-utils
$ sudo yum-config-manager --enable ol7_developer_golang111 ol7_oracle_instantclient

3. Install Go and Verify

Note that you must also install git so that go get can fetch and build the goracle module.

$ sudo yum -y install git golang
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/vagrant/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/vagrant/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/golang"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build013415374=/tmp/go-build -gno-record-gcc-switches"
$ go version
go version go1.11.1 linux/amd64

4. Install Oracle Instant Client and Add its Libraries to the Runtime Link Path

Oracle Instant Client is available directly from Oracle Linux yum server. If you are deploying applications using Docker, I encourage you to check out our Oracle Instant Client Docker Image.

sudo yum -y install oracle-instantclient18.3-basic

Before you can make use of Oracle Instant Client, set the runtime link path so that goracle can find the libraries it needs to connect to Oracle Database.

sudo sh -c "echo /usr/lib/oracle/18.3/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig
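To confirm the libraries are now on the runtime link path, one quick check (my addition, not from the original steps) is:

ldconfig -p | grep -i oracle
# libclntsh.so and the other Instant Client libraries should be listed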

5. Install the goracle Driver

Following the instructions from the goracle repo on GitHub:

$ go get gopkg.in/goracle.v2

6. Create a Go Program to Test your Connection

Create a file db.go as follows. Make sure you change your connect string.

package main

import (
	"database/sql"
	"fmt"

	_ "gopkg.in/goracle.v2" // registers the "goracle" driver with database/sql
)

func main() {
	// Adjust the connect string (user/password@host:port/service) for your database.
	db, err := sql.Open("goracle", "scott/tiger@10.0.1.127:1521/orclpdb1")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer db.Close()

	rows, err := db.Query("select sysdate from dual")
	if err != nil {
		fmt.Println("Error running query")
		fmt.Println(err)
		return
	}
	defer rows.Close()

	var thedate string
	for rows.Next() {
		rows.Scan(&thedate)
	}
	fmt.Printf("The date is: %s\n", thedate)
}
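As an aside I have added (not part of the original post): if you want to pass values into a query, goracle accepts Oracle-style positional binds such as :1. A minimal sketch, assuming that bind syntax, would slot into the same main function:

// Sketch: querying with a bind variable. The :1 placeholder is
// Oracle-style positional binding, which goracle passes through to
// the database; verify against the goracle docs for your version.
var greeting string
err = db.QueryRow("select 'Hello, ' || :1 from dual", "Gopher").Scan(&greeting)
if err != nil {
	fmt.Println(err)
	return
}
fmt.Println(greeting) // Hello, Gopher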

7. Run it!

Time to test your program.

$ go run db.go
The date is: 2018-11-21T23:53:31Z

Conclusion

In this blog post, I showed how you can install the Go programming language and Oracle Instant Client from Oracle Linux yum server and use them together with the goracle driver to connect a Go program to Oracle Database.

References