Implement an MCP Server with ORDS and MLE JavaScript

Oracle provides MCP Server options for your database

If you are serious about building out a secure, scalable and fully integrated MCP experience in your organisation then focus on dedicated Oracle products rather than rolling your own. Start with the Model Context Protocol server for the Oracle Database (see https://github.com/oracle/mcp).

If you want to explore the specification and learn some concepts then read on…

  1. 🎯 The Objective
  2. 📍 Introduction
  3. 🔍 What Is MCP?
  4. 🚀 Why ORDS + MLE JavaScript?
  5. 🛠️ MCP Handler Implementation in ORDS
    1. 🧩 1. Setup – REST enable the database schema and grant MLE privileges
    2. 🧱 2. Defining the MCP Module and Template
    3. ✍️ 3. JavaScript MLE Handler Source
  6. 🔐 Securing the MCP Handler
  7. 🧪 Deploying and Testing the MCP Server
    1. 🛰️ Deploy to Oracle Autonomous Database
    2. 🧰 Test with Tools
      1. ✅ MCP Inspector
      2. 🤖 Claude Desktop (with mcp-remote)
  8. ⚠️ Limitations and Next Steps
    1. 🔐 OAuth2 Support Limitations
    2. 🚫 Lack of Streamable HTTP and Notifications
    3. 🔧 Future Enhancements
  9. 🧠 Summary

🎯 The Objective

An example of what we want to achieve with the ORDS Concert Application data and an ORDS MCP Server Handler

📍 Introduction

The Model Context Protocol (MCP) is a growing standard for enabling AI tools to interact seamlessly via structured requests. MCP is an open standard that enables AI assistants like Claude, ChatGPT, or Cline to securely connect to external data sources and tools. Think of it as a universal adapter that lets AI models interact with the world beyond their training data.

In this article, we’ll demonstrate how to implement an MCP server directly in Oracle using ORDS (Oracle REST Data Services) and JavaScript MLE (Multilingual Engine). With just a few lines of JavaScript and SQL, you can build an HTTP-based MCP server embedded in your Oracle database. This runs securely, for free, on the hosted Autonomous Database at cloud.oracle.com. Of course, you can choose to run your own setup on-premise too.

This article continues the series on MLE-based handlers—check out earlier examples at peterobrien.blog/tag/multilingual-engine. In particular, we build on concepts introduced in JSON-RPC in ORDS Using MLE JavaScript (Oracle 23ai) and the ORDS Sample Application that is mentioned in 🎉 Data Science in Concert: Inspect Data from the ORDS Sample Application with R!

Here be dragons! Keep in mind that you are using AI tools where access to your database is being permitted. Even if you are running your LLM inside your network, you are still asking an agent to decide on actions based on what it discovers. Be careful with your data and database resources. Only make accessible what is required for the workflow you want to automate. Use the database security and auditing facilities available.

Security, Security, Security

🔍 What Is MCP?

The Model Context Protocol is a JSON-RPC–based interface for exchanging AI capabilities, models, prompts, and resource metadata. Without MCP, AI assistants are like incredibly smart people locked in a room with no internet, phone, or way to access current information or perform actions. They can only work with what they learned during training. MCP breaks down these walls by providing a standardised way for AI models to:

  • Stay current with live information
  • Access real-time data from APIs, databases, and services
  • Control applications and systems
  • Retrieve files and documents
  • Execute actions in the real world

MCP operates on a client-server architecture:

  • MCP Client: The AI assistant (like Cline or Claude)
  • MCP Server: A program that provides specific capabilities (tools, resources, or prompts)

The protocol defines how these communicate through three main primitives:

  • Resources: Static or dynamic content that the AI can reference.
  • Tools: Functions the AI can execute.
  • Prompts: Templates that help the AI understand how to use resources and tools effectively in specific contexts.

The MCP specification states that a server should support methods such as:

  • initialize: describes the server and its capabilities
  • tools/list: enumerates available tools
  • tools/call: executes a named tool
  • prompts/list: lists available prompts
  • resources/list: lists downloadable resources

That’s not an exhaustive list and they are not all mandatory to implement. In fact, the only one truly required is initialize but you might find an MCP client that expects others to be implemented.
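
For a flavour of what these exchanges look like on the wire, here is a minimal initialize request a client might send. The field values are illustrative; the matching response from our server is shown in the handler implementation later.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}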

An MCP server can be local, on the same machine running your MCP client (using standard i/o), or remote (using streamable HTTP). The Model Context Protocol server for the Oracle Database is an example of a local MCP server which is accessed using standard i/o. In this article we’ll implement an HTTP-based remote server which will not involve any streaming functionality but will support the synchronous request/response aspects of the specification. Note that the most recent version of the MCP specification (Protocol Revision: 2025-06-18) has removed server-sent events (SSE) as a supported transport.

MCP clients (like Claude) expect a standard JSON response structure and often use OAuth2 for authentication. Not every authorisation scenario outlined in the specification will be covered in this article.

🚀 Why ORDS + MLE JavaScript?

ORDS can serve as the HTTP interface for all of this functionality — directly from your Oracle database. By combining ORDS with the JavaScript Multilingual Engine (MLE) in Oracle 23ai, you can:

  • Implement MCP server methods entirely in server-side JavaScript.
  • Query live database content when responding to MCP tool calls.
  • Package and publish database-backed tools to AI agents without building a separate backend service.
  • Secure everything with OAuth2 client credentials.

This makes it possible to turn your database into an MCP-compatible API that works with modern AI tooling—without leaving the Oracle environment and without sharing more than you need to.

In this example we’ll provide a get_events tool which lists the results of querying the SEARCH_VIEW in the ORDS_CONCERT_APP schema. That information can be gleaned from the REST Enabled endpoint for that view but your AI Assistant does not have access to that. When coding your own MCP Server through ORDS handlers you can provide access to data as you see fit.
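
If you want a quick look at the data the tool will expose, you can query the view directly while connected to the ORDS_CONCERT_APP schema. This is just a sketch; the column names are taken from the tool description used later in the handler.

SELECT event_name, artist_name, venue_name, city_name, event_status_name
FROM   search_view
FETCH FIRST 5 ROWS ONLY;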

Using the Model Context Protocol server for the Oracle Database will give full access to your database schema and therefore is the most powerful option but you’re unlikely to be sharing those DB credentials with anyone else. Implementing your own MCP Server allows you to greatly restrict that access.

🛠️ MCP Handler Implementation in ORDS

There’s some PL/SQL in this article but it is just the invocation of ORDS APIs to create the user defined resources. The real logic is in JavaScript which executes in the database. You can expand on this and implement your own logic for tools, resources or prompts. Refer to the ORDS mle/handler documentation for more information on that.

Note that although we’re using the OAuth2 client credentials flow, there are other authorisation options available. This is something that one must choose wisely because there are aspects of the MCP specification with regard to authorisation discovery that are not currently supported by ORDS. Moreover, there are MCP clients that do not implement or support all the options available. In some cases, such as with Claude Desktop, that support may only be available for certain paid subscriptions or premium level plans.

In this article we will protect the ORDS module and use a client credentials OAuth2 client, which means we have to request an access token manually and then use it in our MCP server reference. That is not ideal but is a limitation of some of the tools used. More about limitations later.

Here’s the actual ORDS module and handler setup used in our implementation…

🧩 1. Setup – REST enable the database schema and grant MLE privileges

This article is based on the ORDS Sample Application so the database account used is ORDS_CONCERT_APP. It will already be REST Enabled but will require the privileges to execute JavaScript in the database. Run the following grants as an administrator user, such as ADMIN in the Autonomous Database.

GRANT EXECUTE ON JAVASCRIPT TO ORDS_CONCERT_APP;
GRANT EXECUTE DYNAMIC MLE TO ORDS_CONCERT_APP;

🧱 2. Defining the MCP Module and Template

Now, while logged in as the ORDS_CONCERT_APP user run the following to create the structure for your implementation. This will define a path /mcp/server for the service but it will require a handler for the POST method.

BEGIN
  ORDS.DEFINE_MODULE(
    p_module_name => 'mcp',
    p_base_path   => '/mcp/');

  ORDS.DEFINE_TEMPLATE(
    p_module_name => 'mcp',
    p_pattern     => 'server');
END;
/

✍️ 3. JavaScript MLE Handler Source

The handler responds to MCP methods such as initialize, tools/list, and tools/call. Here is the JavaScript portion on its own so that each section can be described and explained:

      (req, res) => {
        let response = {
          jsonrpc: "2.0",
          id: null
        };

        try {
          const rpc = req.body;
          response.id = rpc.id ?? null;

          if (rpc.jsonrpc !== "2.0" || !rpc.method) {
            response.error = {
              code: -32600,
              message: "Invalid Request"
            };
          } else if (rpc.method === "initialize") {
            // Respond with server capabilities
            response.result = {
              protocolVersion:"2025-06-18",
              capabilities: {
                tools: {listChanged: false}
              },
              serverInfo: {
                name: "ORDSHandlerExampleServer",
                title: "Example MCP Server implemented in ORDS mle/javascript",
                version: "0.0.1"
              }
            };
          } else if (rpc.method === "tools/list") {
            // Respond with tools list
            response.result = {
                tools: [
                    {
                        name: "get_events",
                        title: "Events Information Provider",
                        description: "Get information on ORDS (Oracle REST Data Services) Sample Application events. The events view brings together artist, venue and schedule concert data. The attributes include EVENT_NAME,ARTIST_NAME,ARTIST_ID,EVENT_ID,EVENT_DATE,EVENT_DETAILS,EVENT_STATUS_NAME,EVENT_STATUS_ID,VENUE_ID,VENUE_NAME,CITY_NAME,MUSIC_GENRES",
                        inputSchema: {
                            type: "object"
                        }
                    }
                ]
            };
          } else if (rpc.method === "tools/call") {
            // Call the get_events tool if that was the tool requested
            if (rpc.params.name === "get_events") {
                const query = 'select * from search_view';
                const res = session.execute(query);
                let textContent = [];
                for (let row of res.rows) {
                  textContent.push({type:"text", text: JSON.stringify(row)});
                };

                response.result = {
                    content: textContent,
                    "isError": false
                };
            } else {
               response.error =  {
                    code: -32602,
                    message: "Unknown tool: invalid_tool_name"
                }
            }
          } else {
            // Unsupported method
            response.error = {
              code: -32601,
              message: "Method not found"
            };
          }

        } catch (e) {
          response.error = {
            code: -32700,
            message: "Parse error",
            data: e.message
          };
        }

        res.status(200);
        res.content_type("application/json");
        res.send(JSON.stringify(response));
      }

At line 2, similar to the /rpc/handler example in the previous article, we define the response structure. We are expecting that all requests that we should act on will have an id, so the response has an id field. We copy that across at line 9.

Lines 16 – 28 deal with initialisation which tells the MCP client about the MCP server and its capabilities. In this case we’re saying that the MCP server implementation only provides tools and it does not send notifications if the tool definitions change.

...
} else if (rpc.method === "initialize") {
  response.result = {
    protocolVersion: "2025-06-18",
    capabilities: { tools: { listChanged: false } },
    serverInfo: {
      name: "ORDSHandlerExampleServer",
      title: "Example MCP Server implemented in ORDS mle/javascript",
      version: "0.0.1"
    }
  };
} else ...

Lines 29 – 42 deal with returning the list of tools that the MCP server supports. The description of the tool is important because that is what helps your AI Assistant determine if the tool will be useful for the given workflow.

...
} else if (rpc.method === "tools/list") {
  response.result = {
    tools: [
      {
        name: "get_events",
        title: "Events Information Provider",
        description: "Get information on ORDS (Oracle REST Data Services) Sample Application events. The events view brings together artist, venue and schedule concert data. The attributes include EVENT_NAME,ARTIST_NAME,ARTIST_ID,EVENT_ID,EVENT_DATE,EVENT_DETAILS,EVENT_STATUS_NAME,EVENT_STATUS_ID,VENUE_ID,VENUE_NAME,CITY_NAME,MUSIC_GENRES",
        inputSchema: {
          type: "object"
        }
      }
    ]
  };
} else ...

Lines 43 – 62 are where the real action is. These lines deal with the invocation of the requested tool. In this case the only tool supported is called get_events which is for a specific query on the database. When you are implementing your own MCP server using ORDS handlers you can code this for your own AI integration needs. Note that the specification allows for various content types. For simplicity, in this case, we are constructing an array of text content entries. Although structuredContent could also be used here, that content type is not supported by many MCP clients.

...
} else if (rpc.method === "tools/call") {
  if (rpc.params.name === "get_events") {
    const query = 'select * from search_view';
    const res = session.execute(query);
    let textContent = [];
    for (let row of res.rows) {
      textContent.push({type:"text", text: JSON.stringify(row)});
    }

    response.result = {
      content: textContent,
      "isError": false
    };
  } else {
    response.error = { code: -32602, message: "Unknown tool: invalid_tool_name" };
  }
} else ...
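
For comparison, a tools/call result that also carried structuredContent would look roughly like the following, per the specification. It is not used in this handler because client support for it is still patchy.

{
  "content": [
    { "type": "text", "text": "{\"EVENT_NAME\":\"...\",\"CITY_NAME\":\"...\"}" }
  ],
  "structuredContent": { "events": [ { "EVENT_NAME": "...", "CITY_NAME": "..." } ] },
  "isError": false
}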

The remainder of the JavaScript code includes placeholders for listing prompts, resources and a little error handling. You can expand on these as you see fit.

Here’s the full handler definition statement to run in your database:

BEGIN
  ORDS.DEFINE_HANDLER(
    p_module_name    => 'mcp',
    p_pattern        => 'server',
    p_method         => 'POST',
    p_source_type    => 'mle/javascript',
    p_items_per_page => 0,
    p_comments       => 'MCP server handler example',
    p_source         =>
'
(req, res) => {
  let response = {
    jsonrpc: "2.0",
    id: null
  };

  try {
    const rpc = req.body;
    response.id = rpc.id ?? null;

    if (rpc.jsonrpc !== "2.0" || !rpc.method) {
      response.error = {
        code: -32600,
        message: "Invalid Request"
      };
    } else if (rpc.method === "initialize") {
      // Respond with server capabilities
      response.result = {
        protocolVersion: "2025-06-18",
        capabilities: {
          tools: {listChanged: false}
        },
        serverInfo: {
          name: "ORDSHandlerExampleServer",
          title: "Example MCP Server implemented in ORDS mle/javascript",
          version: "0.0.1"
        }
      };
    } else if (rpc.method === "tools/list") {
      // Respond with tools list
      response.result = {
        tools: [
          {
            name: "get_events",
            title: "Events Information Provider",
            description: "Get information on ORDS (Oracle REST Data Services) Sample Application events. The events view brings together artist, venue and schedule concert data. The attributes include EVENT_NAME,ARTIST_NAME,ARTIST_ID,EVENT_ID,EVENT_DATE,EVENT_DETAILS,EVENT_STATUS_NAME,EVENT_STATUS_ID,VENUE_ID,VENUE_NAME,CITY_NAME,MUSIC_GENRES",
            inputSchema: {
              type: "object"
            }
          }
        ]
      };
    } else if (rpc.method === "tools/call") {
      // Call the get_events tool if that was the tool requested
      if (rpc.params.name === "get_events") {
        const query = ''select * from search_view'';
        const res = session.execute(query);
        let textContent = [];
        for (let row of res.rows) {
          textContent.push({type:"text", text: JSON.stringify(row)});
        }

        response.result = {
          content: textContent,
          "isError": false
        };
      } else {
        response.error = {
          code: -32602,
          message: "Unknown tool: invalid_tool_name"
        };
      }
    } else {
      // Unsupported method
      response.error = {
        code: -32601,
        message: "Method not found"
      };
    }

  } catch (e) {
    response.error = {
      code: -32700,
      message: "Parse error",
      data: e.message
    };
  }

  res.status(200);
  res.content_type("application/json");
  res.send(JSON.stringify(response));
}
');
END;
/



🔐 Securing the MCP Handler

ORDS provides built-in OAuth2 support for securing handlers. In this example, we use the Client Credentials flow, where:

  • Using a client ID and secret, request a bearer token from ORDS.
  • In your MCP Client, configure the MCP Server entry to use that bearer token.
  • Requests to /mcp/server must include an Authorization: Bearer <token> header.

To secure the handler:

DECLARE
  L_PRIV_ROLES    owa.vc_arr;
  L_PRIV_PATTERNS owa.vc_arr;
  L_PRIV_MODULES  owa.vc_arr;
BEGIN
  ORDS.CREATE_ROLE(p_role_name => 'MCPAgentRole');

  L_PRIV_ROLES( 1 )   := 'MCPAgentRole';
  L_PRIV_MODULES( 1 ) := 'mcp';
  ORDS.DEFINE_PRIVILEGE(
    P_PRIVILEGE_NAME => 'blog.peterobrien.MCPServerExamplePriv',
    P_ROLES          => L_PRIV_ROLES,
    P_PATTERNS       => L_PRIV_PATTERNS,
    P_MODULES        => L_PRIV_MODULES,
    P_LABEL          => 'MCP Server Example Privilege',
    P_DESCRIPTION    => 'Restricts access to the example MCP Server module'
  );
END;
/

Make sure the MCP client you use supports this flow (not all do—see Limitations below).

Speaking of clients, we must now define the OAuth2 client which will get access to this protected service. Note that in this case, in order to be able to use the access token for longer, we’re setting the token duration to 8 hours. This example specifies ChangeMe as the secret to use, so now is a good time to choose another value, or examine the ORDS_SECURITY documentation for other options for generating secrets.

DECLARE
  l_client_cred ords_types.t_client_credentials;
BEGIN
  l_client_cred := ORDS_SECURITY.REGISTER_CLIENT(
    p_name           => 'MCPClient',
    p_grant_type     => 'client_credentials',
    p_description    => 'MCP Client to the example MCP Server.',
    p_client_secret  => ords_types.oauth_client_secret(p_secret => 'ChangeMe'),
    p_support_email  => 'test@example.org',
    p_token_duration => 28800);

  ORDS_SECURITY.GRANT_CLIENT_ROLE(
    p_client_name => 'MCPClient',
    p_role_name   => 'MCPAgentRole');
  COMMIT;
  sys.dbms_output.put_line('CLIENT_ID:' || l_client_cred.client_key.client_id);
  sys.dbms_output.put_line('CLIENT_SECRET:' || l_client_cred.client_secret.secret);
END;
/

That will create a client and output the generated CLIENT_ID and CLIENT_SECRET. You will have to remember these values because they are needed to get an access token. This can be achieved through curl:

curl -i -k --user <client_id>:<client_secret> --data "grant_type=client_credentials" https://<my ORDS server>/ords/ords_concert_app/oauth/token

Replace the variables as appropriate. For example:

curl -i -k --user IV2YsD5Z0sr8_Wvgd0U1jQ..:ChangeMe --data "grant_type=client_credentials" https://c9abzixuw5nq9si-kop3yqis71qcmbx2.adb.eu-frankfurt-1.oraclecloudapps.com/ords/ords_concert_app/oauth/token

That will give you an access token which you will use later.

{
"access_token":"8kbzUqtgsDrCTNKB6Xb32w",
"token_type":"bearer",
"expires_in":28800
}

🧪 Deploying and Testing the MCP Server

🛰️ Deploy to Oracle Autonomous Database

This handler works with the Autonomous Database using its hosted ORDS with 23ai. That’s the simplest and most convenient way to get started with this example, although you can run it on your own infrastructure.

🧰 Test with Tools

You can test your MCP server using the following tools but there are other MCP clients available:

  • MCP Inspector: Enter the /mcp/server URL and token to explore responses interactively.
  • 🤖 Claude Desktop (with mcp-remote): Add your server as a remote tool and use get_events inside Claude chat.

✅ MCP Inspector

The MCP Inspector is a browser based tool for testing and debugging MCP servers. It provides an interactive interface and proxy mechanism that simplifies the development, inspection, and validation of MCP servers before integrating them with AI assistants or production systems. It is the best place to start verifying your handler.

It requires Node.js and is as easy to start as:

npx @modelcontextprotocol/inspector

  1. Start the inspector – it hosts a mini web server and opens a browser to show the UI.
  2. In the UI enter the MCP server address – this is the full path for mine hosted in the Frankfurt region.
  3. Set the Authorization header with the access token as the Bearer Token value.

The inspector tool invoking your hosted MCP Server implementation

With the authentication details provided, just press the Connect button. As you can see from the above recording you get live data from your MCP Server mle/handler implementation. The invocation of the relevant methods initialize, tools/list and tools/call can be seen in the history.
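
If you prefer the command line, the same endpoint can be exercised directly with curl using the bearer token from earlier. For example, to list the tools:

curl -i -k -X POST \
  -H "Authorization: Bearer 8kbzUqtgsDrCTNKB6Xb32w" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' \
  https://<my ORDS server>/ords/ords_concert_app/mcp/server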

Now that you know the MCP Server implementation works and complies with the specification it is time to plug this into an AI Assistant which can operate as an MCP Client. One example is Claude Desktop.

🤖 Claude Desktop (with mcp-remote)

Claude is an AI assistant created by Anthropic. It is built to assist with a wide variety of tasks – from answering questions and helping with analysis to writing, coding, math, creative projects, and having conversations on a wide range of topics. Claude.ai refers to the online version of the AI chatbot, while Claude Desktop is a dedicated application for Windows and macOS that allows users to access Claude’s features directly on their computer, offering a more streamlined and efficient experience. Although still in beta, Claude Desktop can connect to remote MCP servers, allowing you to access tools and data hosted on the internet. This integration enhances Claude’s capabilities by enabling it to perform tasks using resources from various external services. However, it does have its limitations. For example, although the free version works with local MCP servers, using remote MCP servers directly requires an upgrade to a paid plan.

To work around that we’ll use mcp-remote which will act as a local MCP server proxying to your remote server.

Manage your list of MCP Servers in the Developer section of Claude Desktop Settings

The Edit Config button will open up a file explorer window with your claude_desktop_config.json file selected. Edit that file and add an entry for your MCP Server.

{
  "mcpServers": {
    "ORDS-handler-example": {
      "command": "npx",
      "args": [
	"mcp-remote",
	"https://c4tozyxuw8nq2ja-kgl2qyis29qcbmv2.adb.eu-frankfurt-1.oraclecloudapps.com/ords/ords_concert_app/mcp/server",
	"--header",
	"Authorization:${AUTH_HEADER}"
      ],
      "env": {
        "AUTH_HEADER": "Bearer 8kbzUqtgsDrCTNKB6Xb32w"
      }
    }
  }
}

Note that you will have to specify your access token and restart Claude Desktop to pick up the change. In the above example the ORDS-handler-example entry is defined to execute the npx command to run the mcp-remote Node.js package locally. Additional command parameters include the URL for the remote MCP Server and an instruction to include an Authorization header. Note that the access token from earlier is used here.

Once Claude Desktop is restarted, let’s start asking questions. The AI Agent knows that it has an MCP Server available to it that has a tool for retrieving concert events.

At startup Claude Desktop got the list of tools available from the MCP Server

Note that it is best practice not to give an agent unsupervised access to your tools and resources, so in the majority of cases select “Allow once” so that each action is reviewed before executing.

So let’s ask Claude a question:
From the ORDS Sample Application concert data who are the most popular artists with sold out concerts?

Claude gets the live data and performs its analysis on it

And that’s it! You can now chat with an AI Assistant to gain insight on your ORDS Sample Application concert events, venues and artists.


⚠️ Limitations and Next Steps

🔐 OAuth2 Support Limitations

Some MCP clients, such as Claude, do not fully support OAuth2 client credentials at the time of writing. This may require:

  • Temporarily disabling OAuth2 for local testing.
  • Creating a public ORDS endpoint for development use.

Neither of which is really a viable option. Remember the security, security, security statement at the top!

ORDS does not currently provide support for the OAuth2 flow to list identity providers and that reduces the integration options. The approach mentioned in this article to have an access token that lasts longer than the default 1 hour is not practical in the long term. However, you may find an MCP client that works with one of the other OAuth2 flows.

🚫 Lack of Streamable HTTP and Notifications

One limitation of this ORDS mle/javascript MCP server implementation is the absence of streamable HTTP responses and real-time notifications.

  • Streamable HTTP: The MCP specification allows servers to return results progressively, which is useful for long-running operations or large responses. ORDS handlers currently return only complete responses, meaning agents must wait until all processing finishes before receiving data.
  • Notifications: MCP also supports server-initiated notifications to inform clients of changes (e.g., new events becoming available). At present, ORDS does not provide a mechanism for pushing such asynchronous messages, so clients must re-poll endpoints to detect updates.

While these limitations do not prevent basic MCP functionality, they reduce efficiency for agents expecting real-time updates or streamed results.

🔧 Future Enhancements

Here are some possible next steps to build on this example mle/javascript handler:

  • Add a tool for managing events
  • Implement dynamic prompt generation in prompts/list.
  • Add support for downloading resources.
  • Log incoming MCP requests in a table for auditing (see the sketch after this list).
  • Move the logic into an MLE Module so that the JavaScript code is better structured.
  • Explore the use of different OAuth2 security flows which will work with your chosen MCP client.
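
As an illustration of the auditing idea, here is a minimal sketch of a log table. The table and column names are my own choices and not part of the sample application.

-- Minimal audit table for incoming MCP requests (illustrative only)
CREATE TABLE mcp_request_log (
  logged_at   TIMESTAMP DEFAULT SYSTIMESTAMP,
  rpc_method  VARCHAR2(100),
  rpc_payload CLOB
);

The handler could then record each request with a session.execute insert, using bind values for rpc.method and the JSON payload, before processing it. Check the MLE SQL driver documentation for the exact bind syntax.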

🧠 Summary

With just a few lines of MLE JavaScript and an ORDS handler definition, you can turn your Oracle database into a lightweight MCP HTTP server.

This approach allows Oracle developers to publish tools, prompts, and data to modern AI tools like Claude—without deploying external services. Moreover, the data and database resources that are shared are limited to what is specifically coded for in your handler.

📚 Continue exploring: More MLE Articles, https://github.com/oracle/mcp and Model Context Protocol server for the Oracle Database

🎉 Data Science in Concert: Inspect Data from the ORDS Sample Application with R!

  1. Let’s come to an understanding…
    1. Understanding R
    2. Understanding Oracle REST Data Services (ORDS)
      1. Key Features of ORDS:
    3. Understanding the ORDS Sample Application
    4. Understanding OCI and the Autonomous Database
    5. Understanding the scripts
  2. Java Database Connectivity (JDBC)
    1. Using JDBC in R
    2. Load event_views data over JDBC
      1. Pros
      2. Cons
  3. REST Enabled SQL Export
    1. Load event_views data over REST Enabled SQL Export
      1. Pros
      2. Cons
  4. Secured REST Enabled view
    1. Load event_views data using secured REST Enabled view
      1. Pros
      2. Cons
  5. Further Reading and Resources
  6. Conclusion

Now more than ever, data insights are crucial in driving informed decision-making and improving operational efficiency in the information age. The role of data science in uncovering these insights is more and more important, as it enables organisations to harness vast amounts of information effectively. In this article, I will showcase three ways to obtain concert event data from the Oracle REST Data Services (ORDS) Sample Application, stored in a hosted Autonomous Database, using R programming.

  • Oracle JDBC
  • REST Enable SQL Export
  • Secured REST Enabled view

These approaches have pros and cons. In all three R scripts we go through the data retrieval process and use R’s capabilities for data analysis and visualisation. More specifically, we’ll have an R programming language script which loads live Sample Application event data from the Autonomous Database straight into the R environment and produce a pie chart of the number of events in each city.

Let’s come to an understanding…

Understanding R

RStudio IDE is an integrated development environment for R

R is a powerful programming language and environment specifically designed for statistical computing and data analysis. Widely used by data scientists, statisticians, and researchers, R offers a rich ecosystem of packages and libraries that facilitate a broad range of data manipulation, visualisation, and modelling tasks. Its syntax is intuitive, making it accessible for newcomers, while its flexibility allows advanced users to create custom solutions for complex analytical challenges. With strong support for data visualisation and robust statistical capabilities, R has become a go-to tool for transforming raw data into meaningful insights. Additionally, the active R community continuously contributes to its development, ensuring that users have access to the latest advancements in data science methodologies.

Most people programming with R will use RStudio. It provides cross platform consistency, a clean visual interface to the R environment and simplifies development steps ( automatically importing referenced libraries for example ). This article will assume you are running the provided R file in RStudio. Desktop IDE or hosted, the steps are the same.

Install R and RStudio locally or use the hosted RStudio from posit

Understanding Oracle REST Data Services (ORDS)

Oracle REST Data Services (ORDS) enables developers to easily expose Oracle Database data and functionality as RESTful web services. ORDS simplifies the process of building modern web applications by providing a straightforward interface for accessing database resources through standard HTTP methods.

Key Features of ORDS:

  • RESTful APIs: Easily create and manage RESTful APIs for your Oracle Database objects.
  • Integration with SQL, PL/SQL & JavaScript: Use SQL, PL/SQL or JavaScript directly within your APIs to access and manipulate data.
  • Secure: With OAuth 2.0 integration there are multiple options for authentication and authorisation flows.

Understanding the ORDS Sample Application

The Oracle REST Data Services (ORDS) Sample Application serves as a practical demonstration of best practice in building scalable and secure applications using ORDS to expose Oracle Database functionality through RESTful APIs. Designed to help developers understand ORDS capabilities, this sample application showcases how to define RESTful services securely and interact with them using modern web application development practices. Primarily React and Remix in this case.

  • React is a popular open-source JavaScript library developed by Facebook for building user interfaces, particularly for single-page applications.
  • Remix, on the other hand, is a modern web framework built on top of React that focuses on providing an enhanced developer experience and performance.
Artist, Concerts and Venues data from your Oracle Database render in a modern web browser interface – Powered by ORDS!

The ORDS Sample Application is functionally based around concert-related data, allowing users to subscribe to concerts, artists, and venues while providing administrators with tools to manage these entities. Jeff Smith provides a helpful overview in https://www.thatjeffsmith.com/archive/2024/10/typescript-remix-web-app-using-oracle-database-rest-apis/ along with a data model diagram of the tables involved.

The EVENTS_VIEW provides a Concert focus with additional data about Artists and Venues

Understanding OCI and the Autonomous Database

Oracle Cloud Infrastructure (OCI) is a comprehensive cloud platform that offers a range of services for building, running, and managing applications and databases. One of its standout offerings is the Autonomous Database (ADB), a self-driving, self-securing, and self-repairing database service designed to streamline data management. ADB automates routine tasks like patching, scaling, and tuning, allowing users to focus on data-driven insights rather than maintenance. It comes in different flavours, such as Autonomous Data Warehouse (ADW) for analytics and Autonomous Transaction Processing (ATP) for transaction-based workloads. With built-in AI and machine learning capabilities, ADB optimises performance and enhances security, making it a robust choice for organisations looking to leverage the power of data with minimal manual intervention. Some of these features are premium but you can still do a lot with the Always Free services in OCI. One of the most significant features of the Autonomous Database is that each database comes with ORDS installed and ORDS servers managed and maintained by Oracle. Although you can run the ORDS Sample Application with any ORDS supported Oracle Database, the ease of provisioning a database in OCI with ORDS ready to go makes the whole experience a breeze.

The OCI Free Tier is a way to try out Oracle Cloud Infrastructure (OCI) services without any cost. It gives you access to a set of free cloud resources that are available in two forms: Always Free services and Free Trial credits.

  • Always Free Services: These are free forever and include services like Autonomous Database, Compute VMs, and Object Storage. You can use these to build and run small applications without worrying about costs.
  • Free Trial Credits: When you sign up, you get a set amount of credits to try out any OCI service you want for 30 days. This lets you explore more powerful services and features before deciding if you want to upgrade.

With the OCI Free Tier, you can start experimenting with cloud computing, build projects, and learn how to use different tools and technologies without committing any money upfront.

I use the Always Free services for hosting my instance of the ORDS Sample Application

Understanding the scripts

In this article I walk through the steps of using JDBC, REST Enabled SQL and REST Enabling a view to bring EVENTS_VIEW data into the R environment for analysis and data visualisation.

Each R script will end with the same set of instructions to show a summary of the data and generate a pie chart.

# Display the first few rows of the combined data
head(data)
# Summarize the data to count occurrences of each CITY_NAME
city_counts <- table(data$city_name)
# Create the pie chart
pie(city_counts, 
    main = "Distribution of Events by City", 
    col = rainbow(length(city_counts)), 
    labels = paste(names(city_counts), 
                   "(", city_counts, ")", 
                   sep=""
                  )
   )
The result of the pie function in R

Another R function that you’ll see is askpass which prompts for a secret and does not hold the value as a variable which can be easily interrogated. And now, on with the 3 script options to produce the above chart…

Java Database Connectivity (JDBC)

Java Database Connectivity (JDBC) is a standard way for Java applications to connect to databases, run SQL queries, and get results. It works with different databases, making it a popular choice for accessing data. The more experienced R programmers at this stage will no doubt be thinking that JDBC is not the only horse at the races. There are already a few options for connecting to an Oracle Database and bringing data into an R environment. The commercial edition of RStudio provides a range of database connection packages. JDBC provides a free option which matches nicely with our Free Tier use of OCI.

Free: It’s the perfect price for my experimentation budget!

RJDBC makes it possible to use Java Database Connectivity to talk to any database if you have the JDBC Driver classes

Although JDBC is designed for Java, you can also use it in R with the help of the RJDBC package. This package lets R connect to databases using JDBC drivers, allowing you to run SQL queries and bring the data into R for analysis.

Using JDBC in R

To connect to a database using JDBC in R, you need:

  1. JDBC Driver: A file (usually at least one .jar but more than one might be needed) that helps R talk to the database.
  2. Connection String: A string that provides details on how to connect to the database, such as the database URL, port number, and credentials.

More specifically for the Autonomous Database you will need two JDBC related jars, the database wallet archive file extracted and the TNS name to reference in your connection string.

Let’s build that connection string first and let’s start with looking at the TNS Names for the various service levels available for your database. Since we’ll only be loading in data once we can use the low service. For more details on connection options to your Autonomous Database see https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/connect-preparing.html

Below are some screen shots of the Database connection information for my database in Frankfurt region. I’m going to show them backwards to explain some concepts. TNS name references an entry in a wallet that you will download, Mutual TLS requires a certificate that will also be in that wallet. There’s also some information to say you don’t really need to use Mutual TLS in all cases but it is a good practice and keeps your JDBC setup as simple as it can be.

I’m going to use db201912101106_low as the service to connect to using Mutual TLS.
All the details will be in the client wallet which I must download.
Download your wallet ( as a ZIP file ) to make connecting the database easier.

Once you have the TNS Name and the wallet file extracted to a directory your JDBC connection string is

jdbc:oracle:thin:@<tns name>?TNS_ADMIN=<wallet directory>

For example:

jdbc:oracle:thin:@db201912101106_low?TNS_ADMIN=/cloud/project/db201912101106
Using an OCI Object Storage Pre-Authenticated Request to get the wallet

Just so I could run my script in the hosted RStudio on posit Cloud, I needed that wallet accessible there. To get the zip file there I went through a couple of hoops: upload to OCI object storage, create a PAR for the file, use that PAR to get the file and then extract it into a directory called /cloud/project/db201912101106, because db201912101106 just happens to be the autogenerated name of my database when I created it back in 2019 and I haven’t changed it since.
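
A rough sketch of those hoops from the R console, assuming you have created a Pre-Authenticated Request for the uploaded wallet ZIP. The PAR URL below is a placeholder.

# Download the wallet archive via the Pre-Authenticated Request URL and extract it
par_url <- "https://objectstorage.eu-frankfurt-1.oraclecloud.com/p/<PAR-token>/n/<namespace>/b/<bucket>/o/Wallet_db201912101106.zip"
download.file(par_url, destfile = "wallet.zip", mode = "wb")
unzip("wallet.zip", exdir = "/cloud/project/db201912101106")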

Now that we have a connection string, what about the Oracle JDBC jar? The quickest way is to use wget again to just download the most recent version from Maven Central. See https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc11/23.5.0.24.07 for the jar URL. You’re going to need one more jar because of that wallet and Mutual TLS. Get the corresponding OraclePKI jar too.
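
For example, with wget. These URLs follow the standard Maven Central layout for the coordinates used in the script below; double-check the version you want.

wget https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.5.0.24.07/ojdbc11-23.5.0.24.07.jar
wget https://repo1.maven.org/maven2/com/oracle/database/security/oraclepki/23.5.0.24.07/oraclepki-23.5.0.24.07.jar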

Get the ojdbc and oraclepki jars

Which now means you are ready to specify the correct JDBC Driver and Connection String for connecting to your database. For example:

# Specify the path to the JDBC driver
jdbc_driver <- JDBC(driverClass = "oracle.jdbc.OracleDriver",
classPath = "/cloud/project/ojdbc11-23.5.0.24.07.jar:/cloud/project/oraclepki-23.5.0.24.07.jar") # Replace with your path

username <- "ords_sample_app"

# Establish the connection
# Replace TNS Name and wallet location with your actual values
conn <- dbConnect(jdbc_driver,
"jdbc:oracle:thin:@db201912101106_low?TNS_ADMIN=/cloud/project/db201912101106",
username,
askpass::askpass("Enter your password: "))
I did get some errors along the way when I used the wrong JDBC connection, didn’t use the wallet, and did not have Oracle PKI jar in the classpath

Load event_views data over JDBC

Here’s the full ords_sample_app_event_views_jdbc.R script. It requires two libraries and, if you are using RStudio either on your desktop or in posit Cloud, they and their dependencies will be imported.

# ------------------------------------------------------------------------------
# File:        ords_sample_app_event_views_jdbc.R
# Description: This script performs data analysis on the ORDS Sample Application
#              events dataset to generate summary statistics and visualizations.
#              Data is loaded from source DB using JDBC.
#
#              JDBC Connection Requires:
#               - JDBC Jars
#               - Autonomous DB Wallet Archive
#
# Author:      https://peterobrien.blog/
# Date:        2024-10-17
# Version:     1.0
# ------------------------------------------------------------------------------
#
# Notes:
# - Requires the packages: askpass,rjdbc
# - Data source: events_view from the ORDS Sample Application database.
# - Authentication: JDBC authentication with prompt for password.
# Load the necessary packages
library(askpass)
library(RJDBC)
# Specify the path to the JDBC driver
jdbc_driver <- JDBC(driverClass = "oracle.jdbc.OracleDriver", 
                    classPath = "/cloud/project/ojdbc11-23.5.0.24.07.jar:/cloud/project/oraclepki-23.5.0.24.07.jar")  # Replace with your path
# Use database username for your ORDS Sample Application.
username <- "ords_sample_app"
# Establish the connection
# Replace TNS Name and wallet location with your actual values
conn <- dbConnect(jdbc_driver, 
                  "jdbc:oracle:thin:@db201912101106_low?TNS_ADMIN=/cloud/project/db201912101106", 
                  username, 
                  askpass::askpass("Enter your password: "))
# Check the connection
if (!is.null(conn)) {
  print("Connection successful!")
  data <- dbGetQuery(conn, "SELECT * FROM events_view")
  dbDisconnect(conn)
  
  # Display the first few rows of the data
  head(data)
  
  # Summarise the data to count occurrences of each CITY_NAME
  city_counts <- table(data$CITY_NAME)
  
  # Create the pie chart
  pie(city_counts, 
      main = "Distribution of Events by City", 
      col = rainbow(length(city_counts)), 
      labels = paste(names(city_counts), "(", city_counts, ")", sep=""))
}

If you leave all the comments out, that is essentially 12 lines of R code to load the data and create both a summary and a pie chart. Set the username variable to the database username for your ORDS Sample Application.

Pros

  • Simple, uncomplicated code
  • Very flexible – You can write any query you like
  • The JDBC driver is fast and efficient

Cons

  • Dependencies such as jars and wallets have to be obtained
  • Too flexible – You can write any data manipulation statement to change the data
  • Relying on sharing database username and password
  • Using a wrong password will lock the account

REST Enabled SQL Export

Export is a little known aspect of the ORDS REST Enabled SQL service that can save files as CSV, JSON, XML and TEXT. The httr package is used to make the POST request, the jsonlite package looks after the JSON formatting and rio is for loading the data.

Load event_views data over REST Enabled SQL Export

Here’s the full ords_sample_app_event_views_rese.R script. It requires more libraries than the JDBC script but again, if you are using RStudio either on your desktop or in posit Cloud, they and their dependencies will be imported.

# ------------------------------------------------------------------------------
# File:        ords_sample_app_event_views_rese.R
# Description: This script performs data analysis on the ORDS Sample Application
#              events dataset to generate summary statistics and visualizations.
#              Data is exported using Oracle REST Data Services feature:
#              REST Enabled SQL Export
#
#              Export instruction comprises:
#               - SQL select statement for data from events_view
#               - Format for export file to be JSON
#
# Author:      https://peterobrien.blog/
# Date:        2024-10-17
# Version:     1.0
# ------------------------------------------------------------------------------
#
# Notes:
# - Requires the packages: askpass, httr, jsonlite, rio
# - Data source: events_view from the ORDS Sample Application database.
# - Authentication: basic authentication with prompt for password.
# Load the necessary packages
library(askpass)
library(httr)
library(jsonlite)
library(rio)
# Define the ORDS REST Enabled SQL Export URL, username, and password to use.
# Change the URL to match your server and
# use database username for your ORDS Sample Application.
username <- "ords_sample_app"
ords_concert_app_url <- paste0("https://mydatabase.adb.myregion.oraclecloudapps.com/ords/", username, "/")
export_url <- paste0(ords_concert_app_url, "_/sql/export")
# Define the export details including select statement and output format
export_instructions <- list(
  statementText = "select * from events_view",
  formatDetails = list(
    format = "JSON"
  )
)
# Format the export_instructions as a string
request_body <- list(
  data = toJSON(export_instructions, auto_unbox = TRUE)
)
# Specify the basic authentication for every HTTP request but prompt for the password at runtime
set_config(authenticate(username, askpass::askpass("Enter your password: ")));
# Define the local file path where you want to save the file
file_path <- tempfile()
# Make a POST request to ORDS REST Enabled SQL Export
response <- POST(
  url = export_url, 
  body = request_body,
  encode = "form",  # Specifies application/x-www-form-urlencoded encoding
  write_disk(file_path, overwrite = TRUE)
  )
# Check the status code to ensure the request was successful
if (status_code(response) == 200) {
  message("File successfully downloaded to: ", file_path)
} else {
  stop("Failed to download file. Status code: ", status_code(api_response))
}
# Import the downloaded file into R
data <- import(file_path, "json")
# Display the first few rows of the data
head(data)
# Create a frequency table for the city_name column
city_counts <- table(data$city_name)
# Create the pie chart
pie(city_counts, 
    main = "Distribution of events across Cities", 
    col = rainbow(length(city_counts)), 
    labels = paste(names(city_counts), ":", city_counts))

Although we no longer have the overhead of setting up JDBC jars and a wallet file, this R script is much more complicated. We’re now at about 25 lines of R code to use the ORDS REST Enabled SQL Export service to run a query and save the output as a JSON file, which we then load with the rio package. Remember, if you are going to run this script in your environment you must set the database username for your ORDS Sample Application.

Pros

  • No additional setup such as JARS or Wallet
  • Very flexible – You can write any query you like

Cons

  • Too flexible – You can write any data manipulation statement to change the data
  • Relying on sharing username and password
  • Using a wrong password will lock the account

Secured REST Enabled view

Also referred to as AutoREST, this feature in Oracle REST Data Services (ORDS) makes it incredibly easy to expose database objects as RESTful web services. With AutoREST, tables, views, and PL/SQL packages in your Oracle Database can be instantly accessible via REST APIs without writing any additional code. Moreover, these services can be secured, requiring authorisation to access. See Developing Oracle REST Data Services Applications and ORDS.ENABLE_OBJECT for details but the step is as simple as:

BEGIN
    ORDS.ENABLE_OBJECT(
        P_ENABLED        => TRUE,
        P_SCHEMA         => 'ORDS_CONCERT_APP',
        P_OBJECT         => 'EVENTS_VIEW',
        P_OBJECT_TYPE    => 'VIEW',
        P_OBJECT_ALIAS   => 'events_view',
        P_AUTO_REST_AUTH => TRUE
    );
    COMMIT;
END;
/

Now let’s define an OAuth 2.0 client with the relevant privilege and role to access this service:

BEGIN
    ORDS_METADATA.OAUTH.CREATE_CLIENT(
        P_NAME            => 'RStudio',
        P_GRANT_TYPE      => 'client_credentials',
        P_OWNER           => 'ORDS_CONCERT_APP',
        P_DESCRIPTION     => 'Used for R programming language',
        P_ORIGINS_ALLOWED => '',
        P_REDIRECT_URI    => '',
        P_SUPPORT_EMAIL   => 'rstudio@ords-examples.com',
        P_SUPPORT_URI     => '',
        P_PRIVILEGE_NAMES => 'oracle.dbtools.autorest.privilege.ORDS_CONCERT_APP.EVENTS_VIEW'
    );
    ORDS_METADATA.OAUTH.GRANT_CLIENT_ROLE(
        P_CLIENT_NAME => 'RStudio',
        P_ROLE_NAME   => 'oracle.dbtools.role.autorest.ORDS_CONCERT_APP.EVENTS_VIEW'
    );
    COMMIT;
END;
/

Get the CLIENT_ID and CLIENT_SECRET for the client just created.

SELECT
    CLIENT_ID,
    CLIENT_SECRET
FROM
    USER_ORDS_CLIENTS
WHERE
    NAME = 'RStudio';

Those are the values that we’ll use in the R script to get an access token every time the script is executed.

# Define base URL for all requests
username <- "ords_concert_app"
ords_concert_app_url <- paste0("https://c4tozyxuw8nq2ja-db201912101106.adb.eu-frankfurt-1.oraclecloudapps.com/ords/", username, "/")
# Define your OAuth 2.0 credentials and token endpoint
client_id <- "139ZODb9nig15blwlSlUNQ.."
client_secret <- "DAIdfS8C5Wg2nN9SGxwCfQ.."
token_url <- paste0(ords_concert_app_url, "oauth/token")
# Make the request to get the access token
response <- POST(
  url = token_url,
  authenticate(client_id, client_secret),
  encode = "form",
  body = list(
    grant_type = "client_credentials"
  )
)
# Parse the response to extract the access token
token_data <- content(response, as = "parsed", type = "application/json")
access_token <- token_data$access_token

The access_token variable can be used as the bearer token for any requests to the REST service. By default, it will expire in 1 hour after it is issued.

The other interesting part of this script is handling the paginated response from the REST Enabled view which contains information on how to make subsequent requests until all the data is retrieved. Here’s an example of the response structure:

{
  "items": [
... removed for brevity ...
   ],
  "hasMore":true,
  "limit":25,
  "offset":0,
  "count":25,
  "links":[
    {
"rel":"self",
"href":"https://myadb.com/ords/ords_concert_app/events_view/"
    },
    {
"rel":"first",
"href":"https://myadb.com/ords/ords_concert_app/events_view/"
    },
    {
"rel":"next",
"href":"myadb.com/ords/ords_concert_app/events_view/?offset=25"
    }
  ]
}

The R script will loop through the data in blocks. By default the pagination is 25 rows but we can make that 100 rows at a time:

base_url <- paste0(ords_concert_app_url, "events_view/?limit=100")
all_items <- list()
# Function to fetch data from the API using the next link
fetch_paginated_data <- function(url) {
  # Display the GET request URL
  print(paste("GET ", url))
  response <- GET(url, add_headers(Authorization = paste("Bearer", access_token)))
  data <- content(response, as = "parsed", type = "application/json")
  return(data)
}
# Start the loop to fetch all paginated data
next_url <- base_url
repeat {
  # Fetch the data from the current page
  data <- fetch_paginated_data(next_url)
  
  # Append the items from the current page to the list of all items
  all_items <- c(all_items, data$items)
  
  # Check if there's a "next" link in the response
  next_link <- NULL
  for (link in data$links) {
    if (link$rel == "next") {
      next_link <- link$href
      break
    }
  }
  
  # Exit the loop if there's no more data to fetch
  if (is.null(next_link) || !data$hasMore) {
    break
  } else {
    next_url <- next_link
  }
}
# Convert the list of all items into a data frame
data <- do.call(rbind, lapply(all_items, as.data.frame))

Of course 100 records at a time is not all that efficient but I have it set here so that you can see at least two requests.

Load event_views data using secured REST Enabled view

Here’s the full ords_sample_app_event_views_oauth.R script. It requires fewer libraries than the REST Enabled SQL Export script because it doesn’t involve exporting the data to a temporary file and then loading it. More importantly though, the OAuth 2.0 Client Credentials flow is used rather than database usernames and passwords.

# ------------------------------------------------------------------------------
# File:        ords_sample_app_event_views_oauth.R
# Description: This script performs data analysis on the ORDS Sample Application
#              events dataset to generate summary statistics and visualizations.
#              Data is exported using Oracle REST Data Services features:
#               - Protected REST Enabled object ( events_view in this case)
#               - OAuth Client Credentials
#
# Author:      https://peterobrien.blog/
# Date:        2024-10-17
# Version:     1.0
# ------------------------------------------------------------------------------
#
# Notes:
# - Requires the packages: httr, jsonlite
# - Data source: events_view from the ORDS Sample Application database.
# - Authentication: oauth 2.0 client credentials flow
library(httr)
library(jsonlite)
# Define base URL for all requests
username <- "ords_concert_app"
ords_concert_app_url <- paste0("https://c4tozyxuw8nq2ja-db201912101106.adb.eu-frankfurt-1.oraclecloudapps.com/ords/", username, "/")
# Define your OAuth 2.0 credentials and token endpoint
client_id <- "139ZODb9nig15blwlSlUNQ.."
client_secret <- "DAIdfS8C5Wg2nN9SGxwCfQ.."
token_url <- paste0(ords_concert_app_url, "oauth/token")
# Make the request to get the access token
response <- POST(
  url = token_url,
  authenticate(client_id, client_secret),
  encode = "form",
  body = list(
    grant_type = "client_credentials"
  )
)
# Parse the response to extract the access token
token_data <- content(response, as = "parsed", type = "application/json")
access_token <- token_data$access_token
# Display the access token
print(access_token)
# Initialize the base URL for the API and a list to store all items
# Requesting 100 records at a time
base_url <- paste0(ords_concert_app_url, "events_view/?limit=100")
all_items <- list()
# Function to fetch data from the API using the next link
fetch_paginated_data <- function(url) {
  # Display the GET request URL
  print(paste("GET", url))
  response <- GET(url, add_headers(Authorization = paste("Bearer", access_token)))
  content_data <- content(response, as = "parsed", type = "application/json")
  return(content_data)
}
# Start the loop to fetch all paginated data
next_url <- base_url
repeat {
  # Fetch the data from the current page
  paginated_data <- fetch_paginated_data(next_url)
  
  # Append the items from the current page to the list of all items
  all_items <- c(all_items, paginated_data$items)
  
  # Check if there's a "next" link in the response
  next_link <- NULL
  for (link in paginated_data$links) {
    if (link$rel == "next") {
      next_link <- link$href
      break
    }
  }
  
  # Exit the loop if there's no more data to fetch
  if (is.null(next_link) || !paginated_data$hasMore) {
    break
  } else {
    next_url <- next_link
  }
}
# Convert the list of all items into a data frame
data <- do.call(rbind, lapply(all_items, as.data.frame))
# Display the first few rows of the combined data
head(data)
# Summarize the data to count occurrences of each CITY_NAME
# Note that the column name is lowercase in the REST response
city_counts <- table(data$city_name)
# Create the pie chart
pie(city_counts, 
    main = "Distribution of Events by City", 
    col = rainbow(length(city_counts)), 
    labels = paste(names(city_counts), 
                   "(", city_counts, ")", 
                   sep="")
                  )

Leaving out the comments we’re now looking at about 50 lines of code to

  • Get the access token
  • Retrieve the data over REST calls
  • Output the summary and generate the pie chart

Unlike the other scripts, you should be able to run this “as is” in your R environment even if you do not have your own ORDS Sample Application because the CLIENT_ID, CLIENT_SECRET and URLs are valid. At the time of writing anyway 😀

Running ords_sample_app_event_views_oauth.R in hosted RStudio

Pros

  • No additional setup such as JARS or Wallet
  • Creating a secured service for accessing the specific view is really easy
  • No exposure of database username and password
  • OAuth client access can be revoked at any time
  • Read only view so no risk of manipulation of data

Cons

  • Not as flexible as being able to run any query one wants from R
  • More complicated code to get the access_token AND loop through the paginated responses

Further Reading and Resources

For a deeper understanding of how to utilise ORDS in a modern application context, I recommend checking out Jeff Smith’s article on the ORDS Sample Application. Jeff’s blog is immensely valuable, providing insightful tips and a practical approach to building REST APIs effectively.

Conclusion

Integrating R with Oracle REST Data Services provides a powerful toolset for retrieving and analysing data. In this article I detailed three options:

  • JDBC
  • REST Enabled SQL Export
  • REST Services (AutoREST)

By following the steps outlined in this post, you can set up your R environment to interact with ORDS seamlessly. This integration opens up a wealth of possibilities for data analysis and application development.

Feel free to share your thoughts or questions in the comments below!

RESTful CSV file

An article from earlier this year by Ulrike Schwinn titled Oracle Object Storage Access for all Oracle Databases with DBMS_CLOUD got me thinking about doing more with CSV data because it can be treated as a table through DBMS_CLOUD.CREATE_EXTERNAL_TABLE. In essence, provide secure RESTful access to the data, and even visualise that data through charts.

Very quickly, we’ll go from uploading a Salary.csv dataset to rendering a chart of Job Titles for the highest education level and permitting RESTful queries on the data…

Example chart generated on a sharable URL
Query the CSV data through ORDS RESTful interface

Salary Data

Let’s start with a CSV dataset. For this exercise we’ll use the public domain Salary by Job Title and Country dataset from kaggle.com. The download is a ZIP archive with two files. In this case it is just the Salary.csv which will be used.

This dataset provides a comprehensive collection of salary information from various industries and regions across the globe.
Oracle Object Storage Buckets will be used for storing the Salary.csv

I’m not going into all the details of downloading the archive file, unzipping it, creating a bucket, and uploading the Salary.csv but needless to say, those are important steps to get you to this stage…

Dataset is uploaded to my ‘example-bucket’

Note that the file has a URL for accessing and downloading it but that URL is not publicly accessible.

The file can be accessed at this URL, but not publicly

External Table

In the database, the DBMS_CLOUD package can create a table which has its data loaded from the CSV file. To get access to object storage from the database we require a credential object through DBMS_CLOUD.CREATE_CREDENTIAL. Details on achieving that are already covered in Ulrike’s Oracle Object Storage Access for all Oracle Databases with DBMS_CLOUD article and my previous article Data move mojo with ORDS 21.3.0. In fact, I’m going to reuse that DATAPUMP_CRED_NAME credential. That’s just out of convenience for me. You be you and do your own thing.

To create the SALARY table we must specify a column list with data types and sizes. That requires figuring out, based on the data and also the description on kaggle.com, what data types make sense. Some columns are obviously numeric, but it is not always clear how big or small those columns should be.

begin 
    DBMS_CLOUD.CREATE_EXTERNAL_TABLE(table_name=>'SALARY', 
                   credential_name => 'DATAPUMP_CRED_NAME',
                   file_uri_list => 'https://frckdrxj9oco.objectstorage.eu-frankfurt-1.oci.customer-oci.com/n/frckdrxj9oco/b/example-bucket/o/Salary.csv',
                   format => json_object('type' VALUE 'CSV', 'skipheaders' VALUE '1'),
                   column_list =>'Age NUMBER(4,1),
                                  Gender varchar2(6),
                                  Education_Level NUMBER(1),
                                  Job_Title varchar2(100),
                                  Experience NUMBER(4,1),
                                  Salary NUMBER(9,2),
                                  Country varchar2(200),
                                  Race varchar2(100),
                                  Senior NUMBER(1)'
                            );
end;
/

Note that the file_uri_list parameter came directly from the object details panel in Object Storage.

Unless there’s a syntax error, creating the table will always work. You’ll only find out about problems when you run a select query. That’s when the DB attempts to read the file.

If you have done this right…select * from salary returns data from the Salary.csv file.
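
Because problems only surface when the file is actually read, it is worth forcing that read straight away. A minimal sketch, assuming the SALARY table defined above and the DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE procedure that Autonomous Database provides:

-- Quick sanity check: make the database read the file now
select count(*) from salary;

-- Validate the external table definition against the file contents.
-- Rows that cannot be converted to the declared column types are reported.
begin
    DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(table_name => 'SALARY');
end;
/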

Chart

ORDS Database Actions has a charts and dashboards feature that we’re going to use now. Charts let you visualise query results from the database; each chart is constructed from an input SQL command. Dashboards enable you to group charts together to create reports. Let’s define a bar chart showing the number of job titles for the highest education level. So that the chart is not too busy I’m limiting it to only show job titles that have more than 10 occurrences.
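
The exact statement used in the chart definition is only visible in the screenshot below, but a query along these lines, assuming the SALARY columns defined earlier, produces the data for such a chart:

-- Job title counts for the highest education level in the data set,
-- keeping only titles that appear more than 10 times
select job_title, count(*) as occurrences
  from salary
 where education_level = (select max(education_level) from salary)
 group by job_title
having count(*) > 10
 order by occurrences desc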

Defining a bar chart for job titles

For simplicity my chart definition says that it is unprotected, so it is publicly available and anyone can see it with the URL. That’s OK in this scenario because the data is in the public domain anyway, but please consider the protection on your own charts and dashboards.

That URL can be shared and it will render in the browser like this…

Note that the data can be viewed as a chart, or as a table.

Although the chart and its data are available to anyone with the URL, the data exposed is only the data behind the chart.

No direct access to the original Salary.csv data, just aggregation from a specific query

For many folks the only way they can create a chart based on a CSV file and share it is through using a spreadsheet and sharing that, which means sharing the underlying data. In this case the data is not shared, just the chart. So far so cool. Now let’s look at sharing that data and allowing clients to query it.

AutoREST

This will be a short section. It is as simple as:

EXEC ORDS.ENABLE_OBJECT(p_object => 'SALARY');

That’s it. The SALARY table is now accessible through a REST service at /ords/<my schema>/salary/. Moreover, the Filter Object query parameter provides a powerful search mechanism through the Salary.csv file. For example, let’s find people in the data set with more than 30 years experience: q={"experience":{"$gt":30}}
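
From the command line, that query looks like this with curl (a sketch; substitute your own host and schema, and note that --data-urlencode takes care of encoding the query parameter):

> curl --get "https://<host>/ords/<my schema>/salary/" \
       --data-urlencode 'q={"experience":{"$gt":30}}'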

Records in the CSV can be searched with relational operators!

You can’t do that so easily in a text editor 😁

See Filter Objects in the ORDS Developer Guide for more information. Of course this RESTful URL can be protected to require authorisation for accessing it too, while the chart is left unprotected.

Note that with standard REST Enabled tables you can perform DELETE, GET, POST and PUT methods but with external tables, the only functional method is GET. All others will get an HTTP 555 response with this message: ORA-30657: operation not supported on external organized table.

Conclusion

If you have followed the above steps you have opened up new ways of accessing, sharing and interrogating data in a CSV file.

To reiterate, when using an external table the data comes from files in the Cloud or from files in a directory. The data does not get loaded or copied into a table in the database. Drop in a new Salary.csv file with different data and it is immediately picked up on the next query.

The combination of Object Storage, Autonomous Database and Oracle REST Data Services provides some interesting options for doing more with CSV files. It has its limits though. To be realistic, large volumes of data and complex queries would require actually loading copies of the data into the database. For that one should consider Autonomous Database Data Studio to load the data. You can still use ORDS as above to share the data as charts or expose a RESTful interface to the data.

This article was inspired by Ulrike’s January blog post. More excellent articles on data management can be found at https://blogs.oracle.com/authors/ulrike-schwinn. Well worth a look.

Get started with Oracle REST Data Services (ORDS) and Docker

  1. Overview
  2. Autonomous Database
    1. Oracle Content Delivery Network
  3. ORDS Latest on Docker
    1. Dockerfile for ORDS Entrypoint
  4. Docker volume for ORDS configuration
    1. Configuration for Customer Managed ORDS
  5. Start it up!
    1. Verify
  6. Conclusion

Overview

Welcome to the third instalment of my series on using Oracle REST Data Services (ORDS), NGINX, Docker, SSL and Autonomous Database! In this article, I will show you how to quickly get started using ORDS and Docker. Together we will walk through the basics of building the Docker image, storing configuration in a Docker volume, running multiple ORDS instances and balancing the load using NGINX. With the help of this guide, you will be able to have a load balanced Customer Managed ORDS with Autonomous Database up and running in no time. To recap on the previous articles:

  • Load Balancing ORDS with NGINX introduced the concept of load balancing and the most basic of configurations to get started with NGINX running in docker. That was entirely using HTTP as the transport protocol.
  • HTTPS Load Balance: NGINX & ORDS took that a step further by using a self-signed certificate so that the traffic between client and server was over the more secure HTTPS protocol. That was with ORDS instances running on ports 8080 and 8090.

Autonomous Database – hosted and managed for free

Autonomous Database

In this article the ORDS instances will be running in Docker and sharing a configuration for an Autonomous Database hosted on Oracle Cloud Infrastructure Free Tier resources. The prerequisite for this article is an understanding of Installing and Configuring Customer Managed ORDS on Autonomous Database. The database has ORDS and APEX already installed. However, the credentials for the ORDS Runtime user and PL/SQL Gateway user are not known, so the ords install adb command will be used to create and configure additional users in the database to be used by our new ORDS instances.

Oracle Content Delivery Network

In the previous article we had the APEX images in the global/doc_root directory. It is much easier not to have to configure an ORDS instance to serve those static files and to use the Oracle Content Delivery Network instead. One should note that by default, the APEX installation in the Autonomous Database does not use the Oracle CDN for the APEX static resources. So if you have not done so already, use Oracle CDN for the APEX images. The URL to use will depend on the version of APEX in use. At the time of writing, that is APEX 22.2.0. Once you have made this change the next APEX upgrade will keep the IMAGE_PREFIX parameter in sync. See
https://support.oracle.com/epmos/faces/DocumentDisplay?id=2817084.1 and https://blogs.oracle.com/apex/post/running-customer-managed-ords-on-autonomous-database-heres-how-to-get-ready-for-apex-211-upgrade for more information on using Oracle CDN with APEX.

begin
  apex_instance_admin.set_parameter(
  p_parameter => 'IMAGE_PREFIX',
  p_value => 'https://static.oracle.com/cdn/apex/22.2.0/' );
  commit;
end;

ORDS Latest on Docker

As shown in the previous article it is already straightforward to use ORDS from the command line to configure and run in standalone mode. In doing so, you are satisfying the most fundamental requirement for ORDS by providing a supported Java Runtime Environment for it to run in. Running ORDS in Docker takes care of that dependency and provides a consistent structure. For your convenience, I have defined a Dockerfile to create an image with the latest version of ORDS built in. It does require the JDK 17 image from the Oracle Container Registry jdk repository. To use images from the Oracle Container Registry you must first sign in using your Oracle Account to accept the license agreement for the Oracle image. Once you have accepted the licence, follow the installation instructions on the page to login and pull the jdk:17 image:

> docker login container-registry.oracle.com
Username: <Oracle Account Username>
Password: <Oracle Account Password>
Login successful.

> docker pull container-registry.oracle.com/java/jdk:17
17: Pulling from java/jdk
0b93191bf088: Pull complete 
f5a748ad7565: Pull complete 
004350aa024a: Pull complete 
Digest: sha256:6ca4abe688e437a2189e54e42fc8325ed9d7230286f61bfb0199b8e693423f70
Status: Downloaded newer image for container-registry.oracle.com/java/jdk:17
container-registry.oracle.com/java/jdk:17

That will pull into your local Docker repository the most recent Oracle JDK 17 build.

Dockerfile for ORDS Entrypoint

The configuration is quite simple. A couple of folders are exposed for providing configuration and library extensions. That configuration directory is essential but in the majority of cases, customers do not have custom extensions so the lib/ext folder will not be used in this article. Similarly, although the Dockerfile specifies that both port 8080 and port 8443 should be exposed, we will only be using port 8080 for HTTP traffic in this article. It is NGINX that will be terminating the HTTPS traffic before routing upstream to our ORDS instances.

The Dockerfile we’ll use to create the ORDS image is available at ORDS_Latest_Dockerfile. Contents listed below.

#
# Defines a docker image, based on the Oracle JDK image, to run Oracle REST Data Services. During the image building 
# process the most recent version of ORDS will be automatically downloaded and extracted.
#
# Volumes for configuration and lib/ext are defined.
#
# docker run -p 8080:8080 -v ords-adb-config:/opt/ords-config/ -v ords-lib-ext:/opt/ords/latest/lib/ext ords-latest/oraclejdk
#
# See https://peterobrien.blog/ for more information and examples.
#
FROM container-registry.oracle.com/java/jdk:17
MAINTAINER Peter O'Brien
ENV LATEST=/opt/ords-latest/
ENV CONFIG=/opt/ords-config/
WORKDIR $LATEST
ADD https://download.oracle.com/otn_software/java/ords/ords-latest.zip $LATEST
RUN jar xf ords-latest.zip; rm ords-latest.zip; chmod +x bin/ords
VOLUME $LATEST/lib/ext/ $CONFIG
EXPOSE 8080
EXPOSE 8443
WORKDIR $CONFIG
ENTRYPOINT ["/opt/ords-latest/bin/ords"]
CMD ["serve"]

To use the above Dockerfile and build an image locally called ords-latest/oraclejdk, use the following command:

> docker build --tag ords-latest/oraclejdk \
https://gist.githubusercontent.com/pobalopalous/fc6ab4ee777f6b7f32a400e920df682d/raw/ORDS_Latest_Dockerfile

Downloading build context from remote url: https://gist.githubusercontent.com/pobalopalous/fc6ab4ee777f6b7f32a400e920df682d/raw/ORDS_Latest_Dockerfile [==================================================>]     878B/878B
Sending build context to Docker daemon   2.56kB
Step 1/13 : FROM container-registry.oracle.com/java/jdk:17
 ---> 4945318567e9
Step 2/13 : MAINTAINER Peter O'Brien
 ---> Using cache
 ---> 1bb5b3ea1d92
Step 3/13 : ENV LATEST=/opt/ords-latest/
 ---> Using cache
 ---> 4798e9cbc8d1
Step 4/13 : ENV CONFIG=/opt/ords-config/
 ---> Using cache
 ---> a1f6e0bf441c
Step 5/13 : WORKDIR $LATEST
 ---> Using cache
 ---> 1b961db4ee2d
Step 6/13 : ADD https://download.oracle.com/otn_software/java/ords/ords-latest.zip $LATEST
Downloading [==================================================>]  94.62MB/94.62MB
 ---> Using cache
 ---> f6d009ada2f1
Step 7/13 : RUN jar xf ords-latest.zip; rm ords-latest.zip; chmod +x bin/ords
 ---> Using cache
 ---> f6d20c737486
Step 8/13 : VOLUME $LATEST/lib/ext/ $CONFIG
 ---> Using cache
 ---> fde34609973e
Step 9/13 : EXPOSE 8080
 ---> Using cache
 ---> 77933cb86baa
Step 10/13 : EXPOSE 8443
 ---> Using cache
 ---> 094fc3d8332b
Step 11/13 : WORKDIR $CONFIG
 ---> Using cache
 ---> 2d1b41e2c6f0
Step 12/13 : ENTRYPOINT ["/opt/ords-latest/bin/ords"]
 ---> Using cache
 ---> 9974ac45526d
Step 13/13 : CMD ["serve"]
 ---> Using cache
 ---> 4cbe74b80bb5
Successfully built 4cbe74b80bb5
Successfully tagged ords-latest/oraclejdk:latest

You now have an image in your local Docker repository ready to run. Note that the base image is an Oracle JDK 17 one. You can of course change that to something else. At the time of writing, only Oracle JDK 11 and 17 are supported Java Runtime Environments for ORDS.

Docker volume for ORDS configuration

Now it’s time to start putting the ORDS configuration together. In the previous article I outlined a configuration folder structure which was defined on the host computer file system. We are deviating from that in two ways. First, as outlined above, we will not have any APEX images in the global/doc_root directory because we are using the Oracle CDN with APEX in the hosted Autonomous Database. Second, we’re using a Docker volume, rather than the local filesystem, to store all the configuration.

Docker volumes are an ideal way to persist data generated by and used by Docker containers. They provide several benefits, such as:

  • Data isolation: Docker volumes are managed by Docker independently of any one container, so the data persists even when containers are removed or recreated.
  • Easy deployment: Docker volumes can be shared across multiple containers, making it easy to run several instances against the same configuration.
  • Data security: Docker volumes are stored outside the container’s writable layer, so they are not affected by any changes within the container. This helps keep your data consistent.
  • Performance: Docker volumes are stored on the host system, which is generally faster than remote shared storage. This can improve the performance of your containers.

The first configuration item for a Customer Managed ORDS on Autonomous Database is the wallet and getting that wallet zip file into the Docker volume involves a few steps that may not be intuitive if you are not familiar with Docker volumes. You see, to copy a file into a Docker volume, one must do that through a running container, but before we have a running container, we must first create the volume.

> docker volume create ords-adb-config
ords-adb-config

Let’s assume you have downloaded your Autonomous Database wallet zip file to your ~/Downloads directory. For example: ~/Downloads/Wallet_DB202301101106.zip. We’re going to put it in the ords-adb-config volume as /opt/ords-config/Wallet_Autonomous.zip but first we must start a container to use it.

> docker run --detach --rm --name ords-latest \
             -v ords-adb-config:/opt/ords-config/ \
             ords-latest/oraclejdk

Note that we’re not mapping any ports, and once we’re finished with this container it will be removed. Let’s copy that wallet zip file. We know the name of the container is ords-latest because that’s the name we gave in the docker run command. Your wallet file name will be different but we’re going to copy it to /opt/ords-config/Wallet_Autonomous.zip to keep things simple for subsequent commands. If you are going to have multiple pools, you will have to have distinct filenames.

> docker cp ~/Downloads/Wallet_DB202301101106.zip \
            ords-latest:/opt/ords-config/Wallet_Autonomous.zip

That ords-latest container is no longer required. It only came into existence to allow you to copy the zip file. When you stop the container it should be removed automatically.

> docker stop ords-latest

Configuration for Customer Managed ORDS

The wallet zip file is a good start but now it’s time to run through the Customer Managed ORDS with Autonomous Database install step which will create additional users in the database and store the necessary pool settings in the ords-adb-config Docker volume. We’re going to use the non-interactive silent installation so will have to provide the passwords for the existing ADMIN user, and the two users to create. Referring back to the ORDS documentation, the ords install adb command is…

ords install adb --admin-user <DATABASE USER> \
                 --db-user <DATABASE USER> \
                 --gateway-user <DATABASE USER> \
                 --wallet <PATH TO ZIP FILE> \
                 --wallet-service-name <NET SERVICE NAME> \
                 --feature-sdw <BOOLEAN> \
                 --feature-db-api <BOOLEAN> \
                 --feature-rest-enabled-sql <BOOLEAN> \
                 --password-stdin < adbs_passwords.txt

Let’s create that file with the passwords to use. We can delete it once the ords install adb command completes. Create the adbs_passwords.txt file with the three passwords, one on each line:

<PASSWORD FOR admin-user>
<PASSWORD FOR db-user>
<PASSWORD FOR gateway-user>

In my case the adbs_passwords.txt file looks like this:

MyADMIN_password_1s_a_s@cret
K@@PThe!RuntimeUserPr1vate
G@teWayUs3r!IsHidden

With my passwords file I can pass all these details in one command as I run it in Docker. Note that the command line also specifies -i, which instructs the docker engine to use standard input (STDIN) for the container.

> docker run -i -v ords-adb-config:/opt/ords-config/ \
ords-latest/oraclejdk \
install adb \
--admin-user ADMIN \
--db-user ORDS_PUBLIC_USER2 \
--gateway-user ORDS_PLSQL_GATEWAY2 \
--wallet /opt/ords-config/Wallet_Autonomous.zip \
--wallet-service-name db202301101106_low \
--feature-sdw true \
--feature-db-api true \
--feature-rest-enabled-sql true \
--password-stdin < adbs_passwords.txt

ORDS: Release 22.4 Production on Mon Mar 06 09:52:30 2023

Copyright (c) 2010, 2023, Oracle.

Configuration:
/opt/ords-config/

Oracle REST Data Services - Non-Interactive Customer Managed ORDS for Autonomous Database
Connecting to Autonomous database user: ADMIN TNS Service: db202301101106_low
Retrieving information
Checking Autonomous database user: ORDS_PLSQL_GATEWAY2 TNS Service: db202301101106_low
The setting named: db.wallet.zip.path was set to: /opt/ords-config/Wallet_Autonomous.zip in configuration: default
The setting named: db.wallet.zip.service was set to: db202301101106_low in configuration: default
The setting named: db.username was set to: ORDS_PUBLIC_USER2 in configuration: default
The setting named: db.password was set to: ****** in configuration: default
The setting named: plsql.gateway.mode was set to: proxied in configuration: default
The setting named: feature.sdw was set to: true in configuration: default
The global setting named: database.api.enabled was set to: true
The setting named: restEnabledSql.active was set to: true in configuration: default
The setting named: security.requestValidationFunction was set to: ords_util.authorize_plsql_gateway in configuration: default
2023-03-06T09:52:38.256Z INFO Connecting to Autonomous database user: ADMIN TNS Service: db202301101106_low
------------------------------------------------------------
Date : 06 Mar 2023 09:52:38
Release : Oracle REST Data Services 22.4.4.r0411526

Database : Oracle Database 19c Enterprise Edition
DB Version : 19.18.0.1.0
------------------------------------------------------------
Container Name: C4TOSECRETNQ2JA_DB202301101106
------------------------------------------------------------

[*** script: ords_runtime_user.sql]

PL/SQL procedure successfully completed.

2023-03-06T09:52:42.532Z INFO ... Verifying Autonomous Database runtime user
[*** script: ords_gateway_user.sql]

PL/SQL procedure successfully completed.

2023-03-06T09:52:43.674Z INFO ... Verifying Autonomous Database gateway user
2023-03-06T09:52:43.675Z INFO Completed configuring for Customer Managed Oracle REST Data Services version 22.4.4.r0411526. Elapsed time: 00:00:05.407

[*** Info: Completed configuring for Customer Managed Oracle REST Data Services version 22.4.4.r0411526. Elapsed time: 00:00:05.407
]
2023-03-06T09:52:43.720Z INFO To run in standalone mode, use the ords serve command:
2023-03-06T09:52:43.723Z INFO ords --config /opt/ords-config serve
2023-03-06T09:52:43.723Z INFO Visit the ORDS Documentation to access tutorials, developer guides and more to help you get started with the new ORDS Command Line Interface (http://oracle.com/rest).

Note that because the Docker entrypoint for the image that we built earlier was specified as /opt/ords-latest/bin/ords, we can run the ords command line with any supported commands and arguments.

Don’t forget to rm adbs_passwords.txt. You do not need it anymore.

In summary, we’ve just told ORDS to use the wallet zip file and the ADMIN credentials to connect to the hosted service, create some users and persist configuration details on the ords-adb-config volume. The docker container exits because the command is complete. You can see the ORDS configuration by running the ords config list command.

> docker run -v ords-adb-config:/opt/ords-config/ \
             ords-latest/oraclejdk config list

ORDS: Release 22.4 Production on Mon Mar 06 19:07:27 2023

Copyright (c) 2010, 2023, Oracle.

Configuration:
  /opt/ords-config/

Database pool: default

Setting                              Value                                    Source     
----------------------------------   --------------------------------------   -----------
database.api.enabled                 true                                     Global     
db.password                          ******                                   Pool Wallet
db.username                          ORDS_PUBLIC_USER2                        Pool       
db.wallet.zip.path                   /opt/ords-config/Wallet_Autonomous.zip   Pool       
db.wallet.zip.service                db202301101106_low                       Pool       
feature.sdw                          true                                     Pool       
plsql.gateway.mode                   proxied                                  Pool       
restEnabledSql.active                true                                     Pool       
security.requestValidationFunction   ords_util.authorize_plsql_gateway        Pool       

No doubt you will remember this from the previous article about HTTPS and NGINX with ORDS. There’s one more configuration setting to address. That’s to tell ORDS what header key/value pair to use to trust that the request was received by a load balancer over HTTPS even though ORDS is receiving traffic over HTTP.

docker run -v ords-adb-config:/opt/ords-config/ \
  ords-latest/oraclejdk \
  config set security.httpsHeaderCheck "X-Forwarded-Proto: https"

At this point we have a Docker volume ords-adb-config which has all the configuration settings necessary to run one or more Customer Managed ORDS with Autonomous Database instances as we see fit.

Start it up!

From the previous article you have an NGINX configuration that you have running in Docker to talk to two ORDS instances listening on ports 8080 and 8090. Now let’s replace those ORDS instances with ones running in Docker with the above ords-adb-config Docker volume. You can leave the NGINX container running but if you have not done so already, shut down those ORDS instances.

Up until now, we have not specified a container name when running ORDS in Docker. For convenience, we’ll refer to the container listening on port 8080 as ords-latest-8080 and the other one as ords-latest-8090.

> docker run --detach --rm --name ords-latest-8080 \
             -p 8080:8080 \
             -v ords-adb-config:/opt/ords-config/ \
             ords-latest/oraclejdk
9e0d8ec541bc5c360c7e156153cfd8f6437d61ab2d4f627c887f03d7384a56e6

> docker run --detach --rm --name ords-latest-8090 \
             -p 8090:8080 \
             -v ords-adb-config:/opt/ords-config/ \
             ords-latest/oraclejdk
7a36de7fb14e54710181c43caa6fb2aa9dfdf013f5afa32405378da61a9a13e0

Verify

To check that they are up and running have a look at the process list.

> docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED        STATUS        PORTS                                                                      NAMES
2c11ababaf1b   ords-latest/oraclejdk   "/opt/ords-latest/bi…"   4 hours ago    Up 4 hours    8443/tcp, 0.0.0.0:8090->8080/tcp, :::8090->8080/tcp                        ords-latest-8090
7fd8c821be64   nginx                   "/docker-entrypoint.…"   6 hours ago    Up 6 hours    0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   optimistic_kilby
9e0d8ec541bc   30e6e561dc7d            "/opt/ords-latest/bi…"   6 hours ago    Up 6 hours    0.0.0.0:8080->8080/tcp, :::8080->8080/tcp                                  ords-latest-8080

Also use the docker logs command to keep track of the activity and status. We’ve given specific names to the two ORDS containers so we can refer to them directly.

> docker logs -f ords-latest-8080

ORDS: Release 22.4 Production on Mon Mar 06 13:48:57 2023

Copyright (c) 2010, 2023, Oracle.

Configuration:
  /opt/ords-config/

2023-03-06T13:48:58.335Z INFO        HTTP and HTTP/2 cleartext listening on host: 0.0.0.0 port: 8080
2023-03-06T13:48:58.389Z INFO        Disabling document root because the specified folder does not exist: /opt/ords-config/global/doc_root
2023-03-06T13:49:07.009Z INFO        Configuration properties for: |default|lo|
...
Mapped local pools from /opt/ords-config/databases:
  /ords/                              => default                        => VALID     


2023-03-06T13:49:14.790Z INFO        Oracle REST Data Services initialized
Oracle REST Data Services version : 22.4.4.r0411526
Oracle REST Data Services server info: jetty/10.0.12
Oracle REST Data Services java info: Java HotSpot(TM) 64-Bit Server VM 17.0.6+9-LTS-190
> docker logs -f ords-latest-8090      

ORDS: Release 22.4 Production on Mon Mar 06 13:56:22 2023

Copyright (c) 2010, 2023, Oracle.

Configuration:
  /opt/ords-config/

2023-03-06T13:56:23.011Z INFO        HTTP and HTTP/2 cleartext listening on host: 0.0.0.0 port: 8080
2023-03-06T13:56:23.066Z INFO        Disabling document root because the specified folder does not exist: /opt/ords-config/global/doc_root
2023-03-06T13:56:32.683Z INFO        Configuration properties for: |default|lo|
...
Mapped local pools from /opt/ords-config/databases:
  /ords/                              => default                        => VALID     


2023-03-06T13:56:32.683Z INFO        Oracle REST Data Services initialized
Oracle REST Data Services version : 22.4.4.r0411526
Oracle REST Data Services server info: jetty/10.0.12
Oracle REST Data Services java info: Java HotSpot(TM) 64-Bit Server VM 17.0.6+9-LTS-190

As a reminder, to check the logs for the NGINX container you’ll have to specify the container name that was allocated at runtime. In my case it is optimistic_kilby.

> docker logs -f optimistic_kilby
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
 to: 192.168.5.2:8080 {GET / HTTP/1.1} upstream_response_time 0.155 request_time 0.155
172.17.0.1 - - [06/Mar/2023:13:52:58 +0000] "GET /ords/ HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
 to: 192.168.5.2:8090 {GET /ords/ HTTP/1.1} upstream_response_time 2.356 request_time 2.356
 to: 192.168.5.2:8080 {GET /ords/f?p=4550:1:117375695883225::::: HTTP/1.1} upstream_response_time 2.101 request_time 2.101
 to: 192.168.5.2:8090 {GET / HTTP/1.1} upstream_response_time 0.006 request_time 0.006
172.17.0.1 - - [06/Mar/2023:13:53:03 +0000] "GET /ords/ HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
 to: 192.168.5.2:8080 {GET /ords/ HTTP/1.1} upstream_response_time 2.045 request_time 2.045

From the NGINX logs you can see that traffic is being alternated between the ORDS instances listening on ports 8080 and 8090.

As before, the request goes over HTTPS through NGINX and is routed upstream to an ORDS instance.

You can stop a container and restart it to confirm the failover works as before.
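
For example, stop the container listening on port 8080 and requests through NGINX should still get a response from the remaining instance (the curl check assumes NGINX is published on port 443 as shown in the docker ps output above). Starting the container again with the same docker run command used earlier brings it back into rotation:

> docker stop ords-latest-8080

> curl -k https://localhost/ords/

> docker run --detach --rm --name ords-latest-8080 \
             -p 8080:8080 \
             -v ords-adb-config:/opt/ords-config/ \
             ords-latest/oraclejdk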

Conclusion

Building on the previous articles you now have both NGINX and ORDS running in Docker and using an Autonomous Database. This is still effectively a development / proof of concept environment because the DNS entry and SSL certificate are not properly set up to operate seamlessly. The nginx.conf is hardcoded with two upstream ORDS instances to use and the containers are using two specific ports on the host machine. In the next article we’ll look at using docker compose so that we have more flexibility around this.

Using the Dockerfile from this article you have created an ORDS image which can be used to run ORDS commands and update your configuration in ords-adb-config. As an additional exercise you can look into increasing the pool size (jdbc.MaxLimit) and doing a rolling restart of the two ORDS docker containers to pick up that configuration change.
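
A sketch of that pool size change, run against the shared configuration volume in the same way as the earlier config set command; each container can then be stopped and started again one at a time so that one instance is always serving requests:

> docker run -v ords-adb-config:/opt/ords-config/ \
             ords-latest/oraclejdk \
             config set jdbc.MaxLimit 20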

Leave a comment and let me know how you get on.

Data move mojo with ORDS 21.3.0

Now that ORDS 21.3.0 is available in all Oracle Autonomous Database regions there are even more options for copying data to, from and within Oracle Cloud. This release contains two important enhancements to the RESTful Data Pump services provided by the ORDS Database API:

  • Any REST Enabled database account can access the Data Pump services. Of course, the database account in question still needs to have the necessary permissions in the database to perform the actions but before this release, only administrators could access the Database API Data Pump services.
  • Use Oracle Cloud Object Storage for the export / import file destination / source. The databases must support the interaction with Object Storage through the DBMS_CLOUD package. This is available out of the box with Oracle Autonomous Database.

Options? I got plenty

With these new capabilities, particularly with the Object Storage integration, moving to a hosted Oracle Autonomous Database is even easier. Not only that, moving data between existing Autonomous Databases is now easier too. To demonstrate, this article provides a step by step walk through of copying tables from a 19c Autonomous Database to a 21c Autonomous Database.

One request to export and one request to import!

Copying data between Autonomous Databases made easy

Just to reiterate the demonstration scenario: non-administrator database users will copy tables and data from a hosted 19c database to a hosted 21c database with hosted object storage as the intermediary. That’s achieved through one ORDS Database API Data Pump request to export and one ORDS Database API Data Pump request to import.

Source Database -> Oracle Cloud Object Storage -> Target Database

Prepare to win!

To make this all happen so easily does require a little preparation of key components. Here’s what is required…

  • Two Autonomous databases. For this demonstration they are in the same region, but the Object Storage bucket could be configured to replicate to another region.
  • Object Storage bucket. A private bucket for file storage in your Oracle Cloud tenancy. The databases will have access to the bucket but will require credentials to do so.
  • Authorisation Token. This is a revocable credential that will be used by both the source and target databases to connect to the Object Storage bucket.
  • Database Account. In both databases a REST Enabled database user is required. It is these user credentials that will be used to make the REST request to ORDS. Also, it is these database users that will be performing the DBMS_DATAPUMP procedure calls. Therefore they will require certain privileges in the database. If they are not Administrator users they must have these privileges granted to them. More on that later.

Note that all of the above (Database, Object Storage and ORDS) are available with the Free Tier account on Oracle Cloud Infrastructure.

Autonomous Databases

For this demonstration there are two Always Free databases in the Frankfurt region. The one named DB 201912101106 is an Oracle 19c Enterprise Edition database. The one named DB 202109201918 is the 21c database.

Two Always Free Autonomous Databases

Object Storage Bucket

To keep files for this demonstration separate there is a dedicated Object Storage bucket which can be deleted afterwards. There’s plenty of space available in the Free Tier and the bucket is private, but it is good practice not to leave exported data files around for long.

Select ‘Buckets’ in the ‘Storage’ section.

The bucket could be called anything but for simplicity in this demonstration it is named: DataPumpBucket

Create an Object Storage Bucket called DataPumpBucket with a few clicks

Auth Token

So that the databases can interact with the private Object Storage bucket they need authentication credentials. This is achieved through creating an Auth Token and storing it as a credential in the databases. Tokens are created for a specific Oracle Cloud user and are managed in the User Settings section for the user. The quickest way there is to select User Settings from the profile menu option in the top right corner.

Once in the User Settings section select the Auth Tokens resources tab and click on the Generate Token button. In the popup dialog enter a name for the token. This name can be anything that helps describe the purpose of the token and makes it easier to identify the token later for deleting it if necessary. What really matters is the generated token value because that is effectively the secret password that will be stored in both the source and target databases. After entering a name, in this case DataPumpToken, press the Generate Token button to produce a token value. The value is hidden by default. Copy that value as it will not be retrievable once the dialog box is closed. That token value will be used later when creating Database Credentials in both databases.

Create an Auth Token and copy the value

Database Account – Source and Target Schemas

The source schema will have tables with data that will be exported to the Object Storage bucket. That user must be created, granted the necessary privileges, REST Enabled, and have the tables created too. With the exception of creating tables, the same actions will be taken for the database account in the target database. In both cases the schema/user will be called DBTST. In the source database, the 19c one, the ADMIN user can create a REST Enabled user through Database Actions.

Creating a standard REST Enabled database user

The same can be done by the ADMIN user in the 21c target database. Once created, these users can invoke the Data Pump services in the ORDS Database API but won’t achieve much as they do not have the necessary privileges. In both databases the ADMIN user must run the following to grant these privileges.

-- REQUIRED FOR DATA PUMP TO OBJECT STORAGE
GRANT EXECUTE ON DBMS_CLOUD TO DBTST;

-- REQUIRED FOR DATA PUMP LOGS
GRANT READ, WRITE on DIRECTORY DATA_PUMP_DIR to DBTST;

-- THESE EXPLICIT GRANTS ARE REQUIRED FOR DATA PUMP
GRANT CREATE ROLE TO DBTST ;
GRANT CREATE VIEW TO DBTST ;
GRANT CREATE SESSION TO DBTST ;
GRANT CREATE TABLE TO DBTST ;
GRANT CREATE SEQUENCE TO DBTST ;
GRANT UNLIMITED TABLESPACE TO DBTST ;
GRANT CREATE PROCEDURE TO DBTST ;

Note that the above is not necessary for users with the DBA role as those users will have these privileges already.

Database Credentials

Now we get into defining the credentials that both source and target database accounts will use. Although Data Pump can work with global credentials, the ORDS Database API currently restricts the use of credentials to just those that the user has defined themselves. In other words, the credential must be found by name in the USER_CREDENTIALS view and be ENABLED. Login to both the source and target database accounts and run this command, using the token value copied from the Auth Token step and the corresponding username…

Defining a credential in the database with OCI username and Auth Token as password
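
In outline the statement looks like this (a sketch; substitute your own Oracle Cloud username and the token value copied above):

begin
    DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DATAPUMP_CRED_NAME',
        username        => '<ORACLE CLOUD USERNAME>',
        password        => '<AUTH TOKEN VALUE>'
    );
end;
/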

In the above example the credential name given is DATAPUMP_CRED_NAME. It could be called anything. Whatever name is given, this is the name that is used in the request payload to initiate an export or import job later.

Source Tables

The purpose of this exercise is to show data being copied from a source (19c Autonomous Database) to a target (21c Autonomous Database) so let’s create some tables and put some data in them. For the source database account run this dpexp_setup.sql script.

Create tables and populate them with data.

Setup a REST client

Any client for sending HTTP requests will do and curl is a common command line utility that is used in many examples. Although ubiquitous, curl does have a few drawbacks for REST calls, in particular that it does not format the response structure well. In this demonstration we’ll use the Postman app to make the requests. For convenience we will import the OpenAPI document that is generated by ORDS. The URL for the openapi.json document will be different for both the source and target REST Enabled schemas and we can take advantage of that to have separate API collections.

To get started, import the API document for the source schema and set the Basic Authentication credentials that will be inherited by all the requests in the collection. Rename the collection to make it clearer what it is for.

Import OpenAPI document to Postman

Make your move

Now it’s time to do what we came here for…

  • Export from the source Autonomous Database
  • See the files in Object Storage
  • Import to the target Autonomous Database
  • See the data in the new tables

Export

For this demonstration we’ll export all tables, except for JOB_HISTORY, in the DBTST schema to the bucket called DataPumpBucket. The URL for the bucket and the Data Pump filename pattern will be used. You can derive the URL from the OCI console Bucket details page, which ends with /DataPumpBucket/objects. Replace the https://console. prefix with https://objectstorage. Then replace the DataPumpBucket/objects suffix with DataPumpBucket/o/ and add the filename pattern. In this case we’ll go with EXPDAT%U-DBTST.DMP. Oracle Data Pump will create numbered files with that pattern in the bucket.
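
Stripped down to the parameters described in this article, the export request looks something like the sketch below. The host and the Database API endpoint path are assumptions here, and the filter that excludes the JOB_HISTORY table is omitted; the Postman screenshot shows the actual request used:

> curl -X POST -u DBTST:<PASSWORD> \
       -H "Content-Type: application/json" \
       "https://<source-db-host>/ords/dbtst/_/db-api/stable/database/datapump/jobs/" \
       -d '{
             "operation": "EXPORT",
             "credential_name": "DATAPUMP_CRED_NAME",
             "file_name": "https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/DataPumpBucket/o/EXPDAT%U-DBTST.DMP"
           }'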

Initiate a Data Pump Export Job

Now that the Data Pump Job is created its status and log can be retrieved. It’s easy to create a new REST request in Postman just by clicking on the link in the response body. Save the request definition so that the previously defined Basic Authentication can be inherited.

Review the job status and log file

Review Object Storage

Refresh the bucket details page to see that Oracle Data Pump has created files there.

Oracle Data Pump created export files based on the requested filename pattern

Import

Now for getting the data imported to the target database. As before, the Postman app has the Database API document imported as a collection so that the URL and basic authorisation details are defined in one place. Using the Database API we can confirm that this DBTST schema does not have any tables.

Database API – Get all tables with Filter Object query

Next, submit the import request. The credential_name and file_name parameters are the same as for the export but the operation parameter value should now be IMPORT.
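
The same sketch as before, this time posted to the target database, with the operation value switched (again, the host and endpoint path are assumptions):

> curl -X POST -u DBTST:<PASSWORD> \
       -H "Content-Type: application/json" \
       "https://<target-db-host>/ords/dbtst/_/db-api/stable/database/datapump/jobs/" \
       -d '{
             "operation": "IMPORT",
             "credential_name": "DATAPUMP_CRED_NAME",
             "file_name": "https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/DataPumpBucket/o/EXPDAT%U-DBTST.DMP"
           }'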

Initiate the Data Pump import job

Now review the Data Pump import job status and confirm that the tables have been created.

Check the job status and see that the tables are created

Now that the tables are there, Database Actions can be used to confirm they have data too.

See the imported data
While in Database Actions you can review Data Pump jobs too!

Conclusion

What has been demonstrated here is just one of the many options now available for getting data to, from or between Autonomous Databases. There were a few prerequisite steps but once they are in place, exporting and importing can be repeated with just one request each.

Although an on-premise database may not have the DBMS_CLOUD integration for accessing Oracle Cloud Object Storage, the ability for non-administrator users to import and export to their database directory is now available through ORDS Database API.

The demonstration showed exporting and importing tables but don’t forget that through the ORDS Database API Data Pump services you can do much more: export full databases, remap a schema on import, and more. Over to you to take this further in your environment.