Oracle Database Tools Service makes a lot of things possible. At its core is the concept of a connection library that allows you to define the connection details for a database once and then reuse that definition in a range of scenarios: SQL Worksheet, SQLcl, Java or Python clients and more.
One of the hosted services that becomes available through the Database Tools Service is ORDS REST Enabled SQL. The SQL Worksheet in Oracle Database Tools Service and Oracle Database Actions both use the ORDS REST Enabled SQL service to run the given SQL statements in the corresponding database. All you need to construct the relevant service URL is the OCID of your Database Tools Connection definition and a little information about the Oracle Cloud region.
REST Enabled SQL Service URL
In the Frankfurt region example above, the connection OCID is a really long string: ocid1.databasetoolsconnection.oc1.eu-frankfurt-1.amaaaaaamdjeo4qagsuv6lv3acwsomething2y26ya
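Putting the region and OCID together, the service URL follows a pattern along these lines (the endpoint host and API version are as I understand them, so verify against the Database Tools Service documentation for your region):

https://sql.dbtools.eu-frankfurt-1.oci.oraclecloud.com/20201005/ords/<connection-ocid>/_/sql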
Now that you have the URL for the endpoint you will have to prove to the Database Tools Service who you are. That’s where an API Key is needed. You can define one in your User Settings.
With that API Key you can go to your APEX workspace and record the credential details.
APEX REST Enabled SQL Reference
With the OCI Web Credentials specified in the APEX workspace you can now create a REST Enabled SQL Service reference. This brings together the URL you identified earlier and the credentials that will be used to sign each request.
Off the charts!
So far, so simple. Now let’s use that REST Enabled SQL reference in an APEX application. In this example we’ll use the HR Connection REST Enabled SQL reference to create a bar chart of employee salaries.
What you have seen is database data brought into an APEX chart in a quick and secure manner. Because the connection details live in the Database Tools Service, you can revoke the connection or change the underlying database connection details without having to update the APEX application.
Through a series of screenshot images you’ve been introduced to functionality from a range of Oracle products. In summary:
Sometimes it can be hard to get to the root cause of an error reported by software, which makes it even more difficult to solve. This org.eclipse.jetty.io.EofException with message GZIP compression failed for a static resource is one of those. There can be any number of root causes for this exception. One of them is memory. Or rather, the lack of it.
With ORDS 22.1.0 and later, the default Java heap size may not be ideal for your specific workload. Static documents, or even large result sets, read into memory might cause your request processing to hit a limit.
Memory options for Java applications can be set at runtime. To achieve this with ORDS set the JDK_JAVA_OPTIONS environment variable before starting ORDS. For example
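Something like this, assuming a Linux shell, the ords launcher on your PATH, and an illustrative configuration directory:

export JDK_JAVA_OPTIONS="-Xmx512m"
ords --config /path/to/conf serve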
You should see output from the Java runtime environment stating that it has applied the options at startup.
NOTE: Picked up JDK_JAVA_OPTIONS: -Xmx512m
ORDS: Release 22.1 Production on Tue May 03 15:00:27 2022
Identifying the values that are applicable to your environment does require some investigation and, in some cases, some trial and error. Now you know, though, that a GZIP compression error can have memory issues as the root cause, and you also know how to fix it.
UPDATE! As of ORDS 22.2.0 there is a command to generate a web application for deployment to Apache Tomcat or Oracle WebLogic Server. See Deploying Oracle REST Data Services. The below article is informative and perhaps worth your time to get an understanding of web application deployment, but there are supported, documented steps for achieving the same thing with the ORDS product. In fact, what is in the ORDS product is better because it also caters for jar content in the /lib/ext/ folder, which this article does not cover.
ORDS 22.1.0 introduced a substantial set of changes: Java 11 as the minimum, a new command line interface and a new configuration directory structure. In fact, it is a release that introduces a whole new deployment paradigm.
What we knew as Standalone Mode in previous ORDS releases still exists and is now initiated through a serve command. The embedded Eclipse Jetty web server and servlet container engine is still at the heart of this convenient way to have ORDS listen for HTTP/HTTPS traffic. The significant change is how the configuration directory is determined at startup. There is no longer a requirement to modify the distributed ords.war to set the configuration directory location.
Deployment on a supported servlet container, such as Oracle WebLogic Server or Apache Tomcat, is similarly impacted by this principle that the distributed ords.war should not be modified. A new mechanism for specifying the configuration directory when the application is deployed is required. The ORDS documentation covers the standard approach for a typical setup: set a Java system property ( -Dconfig.url=/ords_config/ ), start up your server(s) and deploy the web application.
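On Apache Tomcat, for example, one conventional place to supply that system property is a setenv.sh script. A minimal sketch, assuming a standard Tomcat layout and the /ords_config directory from above:

# $CATALINA_BASE/bin/setenv.sh
export CATALINA_OPTS="$CATALINA_OPTS -Dconfig.url=/ords_config"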
For some developers, who may not have a full production environment scaled out for high availability, or who may be restricted in their Apache Tomcat or WebLogic Server deployment options, the approach of separate servers each with their own ords.war and configuration is not always feasible. With previous ORDS releases they may have had the practice of deploying multiple ORDS web applications with different contexts and configurations. At first it would appear that the new direction in ORDS 22.1.0 makes that impossible, but it does not. All that is required are a few extra steps.
Bake your own
For deploying to Apache Tomcat or WebLogic Server you can make your own, separate web application with the config.url baked in. Moreover, you can call it whatever makes sense in your context.
This script produces a new web application archive file with the jars from the distributed ords.war and a web.xml deployment descriptor that has the configuration directory path set. That way, wherever it is deployed, it will refer to the specified configuration directory.
Usage: create_deploy_war <source ords.war> <destination war filename> <configuration directory>
For example, suppose I want to have two ORDS instances deployed, called tom and jerry, with two separate configuration directories, and I have extracted ORDS 22.1.0 to /opt/oracle/ords-22.1.0.105.1723/. The jerry web application will be configured with a default pool pointing to an Oracle 19c database. The tom web application will be configured with a default pool pointing to an Oracle 21c database. We will show that both web applications have these configurations by using the REST Enabled SQL service in ORDS to return database version information. Before that, the web application files must first be created…
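Assuming the script shown later is saved as create_deploy_war.py, and with illustrative configuration directory paths, the two archives could be produced like this:

python create_deploy_war.py /opt/oracle/ords-22.1.0.105.1723/ords.war jerry.war /opt/oracle/jerry_config
python create_deploy_war.py /opt/oracle/ords-22.1.0.105.1723/ords.war tom.war /opt/oracle/tom_config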
The behaviour is similar with Apache Tomcat: two different web applications, each with its own separate configuration.
What I have demonstrated is a script which produces a new web application for deployment on Apache Tomcat or WebLogic Server. The script is written for Python 2 because that is the version distributed with Oracle WebLogic Server. It creates a new web application based on the distributed ords.war but without the META-INF content. That content includes a signature digest of the web.xml, which would be invalid once the new deployment descriptor is written to the new WAR file.
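To make the mechanics concrete, here is a condensed, hypothetical sketch of that approach. It is not the full script: it assumes the context parameter can simply be spliced in directly after the opening <web-app> tag of the existing deployment descriptor, and it does no validation of its arguments.

#!/usr/bin/python
# Python 2 sketch: copy ords.war to a new WAR, dropping META-INF and
# injecting a config.url context parameter into WEB-INF/web.xml.
import sys
import zipfile

def create_deploy_war(src_war_path, dst_war_path, config_dir):
    param = ('<context-param>'
             '<param-name>config.url</param-name>'
             '<param-value>%s</param-value>'
             '</context-param>') % config_dir
    src = zipfile.ZipFile(src_war_path, 'r')
    dst = zipfile.ZipFile(dst_war_path, 'w', zipfile.ZIP_DEFLATED)
    try:
        for entry in src.namelist():
            # META-INF holds a digest of web.xml that would no longer match
            if entry.startswith('META-INF/'):
                continue
            data = src.read(entry)
            if entry == 'WEB-INF/web.xml':
                # Splice the context parameter in after the <web-app ...> tag
                pos = data.index('>', data.index('<web-app')) + 1
                data = data[:pos] + param + data[pos:]
            dst.writestr(entry, data)
    finally:
        src.close()
        dst.close()

if __name__ == '__main__':
    if len(sys.argv) != 4:
        print 'Usage: create_deploy_war <source ords.war> <destination war filename> <configuration directory>'
        sys.exit(1)
    create_deploy_war(sys.argv[1], sys.argv[2], sys.argv[3])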
This is what makes it possible to have multiple ORDS instances deployed to the same server all with different configuration directories.
RESTful services often involve JSON structures for request and response payloads. On occasion there is a requirement to integrate with another service or system that does not use JSON or expects a particular JSON structure. This can be catered for in an ORDS plsql/block based service and in practice that may be the most performant approach to take for individual services. Imagine a scenario where you have existing services, perhaps from AutoREST enabling tables and views, and you want a YAML representation option as well. ORDS provides a plugin framework which makes it possible to extend the functionality and introduce custom behaviour such as this.
Default content type: application/json
When an ORDS service is created, by REST enabling a table for example, the default content type for request and response payloads is application/json. The following request gets data from the EMPLOYEES table in the HR schema…
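Sketched with curl against the standalone instance described in the prerequisites below, a plain request like this returns JSON:

curl http://localhost:8080/ords/hr/employees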
The end goal for this exercise is to convert any application/json response to a text/yaml response without having to modify the service implementation. The client specifies that YAML should be returned by stating that text/yaml is acceptable…
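Again sketched with curl, only the Accept header changes:

curl -H "Accept: text/yaml" http://localhost:8080/ords/hr/employees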
To achieve this we will code our own custom ORDS plugin. The ORDS distribution from oracle.com/rest contains example plugins and, for simplicity, one of these examples will be copied as the basis of our new custom plugin. First I'll provide the code and steps to build the plugin. Then I will go through the plugin code to explain key parts.
Assumptions and prerequisites
This example is based on the plugin examples in the ORDS Getting Started tutorial. It is assumed that you are already familiar with that plugin-demo Servlet example.
We're using ORDS 21.4.0 with Oracle Java 1.8 in standalone mode. It has already been set up with a configdir containing valid connection details to a database that has ORDS installed. There is a REST Enabled schema in the database with a REST Enabled table that does not require authentication to access. In my example the schema is HR, the table is EMPLOYEES and the RESTful endpoint is http://localhost:8080/ords/hr/employees.
ORDS 21.4.0 ships with Jackson 2.13.0 so any other jars that are required at compile or runtime will be related to that particular release. If you are using a later version of ORDS you should use the Jackson release that is applicable.
In this article I will refer to the new plugin as plugin-yaml but you can use any name that suits.
Copy the examples/plugins/plugin-demo directory to examples/plugins/plugin-yaml
Change the project name in the build.xml to plugin-yaml
<project default="dist" name="plugin-yaml">
Remove the examples/plugins/plugin-yaml/src/example/PluginDemo.java
There are 3 jars to add to the ords.war as plugins: the plugin-yaml.jar which we have built from source and the 2 runtime dependencies, jackson-dataformat-yaml and snakeyaml. Any subsequent code changes in examples/plugins/plugin-yaml/src/ will require the ant project to be built again, but only the produced plugin-yaml.jar must be added to the ords.war again.
Example output from running ant command:
[delete] Deleting directory /scratch/ords-21.4.0.348.1956/examples/plugins/plugin-yaml/built
[mkdir] Created dir: /scratch/ords-21.4.0.348.1956/examples/plugins/plugin-yaml/built/classes
[javac] Compiling 1 source file to /scratch/ords-21.4.0.348.1956/examples/plugins/plugin-yaml/built/classes
[javac] Note: Discovered type annotated with @Provides: example.PluginYaml
[jar] Building jar: /scratch/ords-21.4.0.348.1956/examples/plugins/plugin-yaml/built/plugin-yaml.jar
Total time: 0 seconds
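With the jar built, the three jars can be added using the ords.war plugin command. The dependency jar names below assume the releases that pair with Jackson 2.13.0, so check the file names in your own environment:

java -jar ords.war plugin built/plugin-yaml.jar
java -jar ords.war plugin jackson-dataformat-yaml-2.13.0.jar
java -jar ords.war plugin snakeyaml-1.28.jar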
The PluginYaml.java is a basic javax.servlet.Filter implementation that replaces the response output stream when the servlet container is about to return that response to the calling client. When it determines that the client provides Accept: text/yaml in the request, the filter supplies an HttpServletResponseWrapper which captures the original response. That wrapper, in this case called ServletResponseWrapperCopier, uses Jackson mappers to produce a YAML representation of the JSON response content and returns that instead.
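To show the shape of such a filter, here is a compact, hypothetical sketch of the pattern just described. The class and wrapper names come from the description above, but the body is my own illustration rather than the exact plugin source; a real implementation would also need to take care of headers such as Content-Length and of error handling.

package example;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLMapper;
import oracle.dbtools.plugin.api.di.annotations.Provides;

@Provides
public class PluginYaml implements Filter {

  public void init(FilterConfig config) { }

  public void destroy() { }

  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    String accept = ((HttpServletRequest) req).getHeader("Accept");
    if (accept == null || !accept.contains("text/yaml")) {
      // Not a YAML request: pass straight through untouched
      chain.doFilter(req, resp);
      return;
    }
    // Let the rest of the chain write its JSON into our buffer instead
    ServletResponseWrapperCopier copier =
        new ServletResponseWrapperCopier((HttpServletResponse) resp);
    chain.doFilter(req, copier);
    // Parse the captured JSON and re-serialise it as YAML
    String yaml = new YAMLMapper().writeValueAsString(
        new ObjectMapper().readTree(copier.getCapturedBytes()));
    resp.setContentType("text/yaml");
    resp.getWriter().write(yaml);
  }

  // Response wrapper that copies the body into an in-memory buffer
  static class ServletResponseWrapperCopier extends HttpServletResponseWrapper {
    private final ByteArrayOutputStream copy = new ByteArrayOutputStream();
    private PrintWriter writer;

    ServletResponseWrapperCopier(HttpServletResponse response) {
      super(response);
    }

    byte[] getCapturedBytes() {
      if (writer != null) {
        writer.flush(); // make sure buffered characters reach the copy
      }
      return copy.toByteArray();
    }

    @Override
    public ServletOutputStream getOutputStream() {
      return new ServletOutputStream() {
        @Override public void write(int b) { copy.write(b); }
        @Override public boolean isReady() { return true; }
        @Override public void setWriteListener(WriteListener listener) { }
      };
    }

    @Override
    public PrintWriter getWriter() {
      if (writer == null) {
        writer = new PrintWriter(copy);
      }
      return writer;
    }
  }
}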
The performance overhead of producing a response in one structure and creating a copy of it in a different structure may be quite significant for large payloads. Although ORDS provides a plugin framework for you to add functionality, your custom plugin, and any third party jars that you add to the ords.war, are not supported by Oracle. The upshot is obvious. When it comes to plugins: test, test, test.
Now that ORDS 21.3.0 is available in all Oracle Autonomous Database regions there are even more options for copying data to, from and within Oracle Cloud. This release contains two important enhancements to the RESTful Data Pump services provided by the ORDS Database API:
Any REST Enabled database account can access the Data Pump services. Of course, the database account in question still needs the necessary permissions in the database to perform the actions, but before this release only administrators could access the Database API Data Pump services.
Use Oracle Cloud Object Storage as the export/import file destination/source. The databases must support interaction with Object Storage through the DBMS_CLOUD package. This is available out of the box with Oracle Autonomous Database.
Options? I got plenty
With these new capabilities, particularly with the Object Storage integration, moving to a hosted Oracle Autonomous Database is even easier. Not only that, moving data between existing Autonomous Databases is now easier too. To demonstrate, this article provides a step by step walk through of copying tables from a 19c Autonomous Database to a 21c Autonomous Database.
Just to reiterate the demonstration scenario: non-administrator database users will copy tables and data from a hosted 19c database to a hosted 21c database with hosted object storage as the intermediary. That is achieved through one ORDS Database API Data Pump request to export and one ORDS Database API Data Pump request to import.
Prepare to win!
To make this all happen so easily does require a little preparation of key components. Here’s what is required…
Two Autonomous databases. For this demonstration they are in the same region, but the Object Storage bucket could be configured to replicate to another region.
Object Storage bucket. A private bucket for file storage in your Oracle Cloud tenancy. The databases will have access to the bucket but will require credentials to do so.
Authorisation Token. These are revocable credentials that will be used by both the source and target databases to connect to the Object Storage bucket.
Database Account. In both databases a REST Enabled database user is required. It is these user credentials that will be used to make the REST request to ORDS. Also, it is these database users that will be performing the DBMS_DATAPUMP procedure calls. Therefore they will require certain privileges in the database. If they are not Administrator users they must have these privileges granted to them. More on that later.
Note that all of the above: Database, Object Storage and ORDS, are available with the Free Tier account on Oracle Cloud Infrastructure.
For this demonstration there are two Always Free databases in the Frankfurt region. The one named DB 201912101106 is an Oracle 19c Enterprise Edition database. The one named DB 202109201918 is the 21c database.
Object Storage Bucket
To keep the files for this demonstration separate there is a dedicated Object Storage bucket which can be deleted afterwards. There's plenty of space available in the Free Tier and the bucket is private, but it is good practice not to leave exported data files around for long.
The bucket could be called anything but for simplicity in this demonstration it is named: DataPumpBucket
So that the databases can interact with the private Object Storage bucket they need authentication credentials. This is achieved through creating an Auth Token and storing it as a credential in the databases. Tokens are created for a specific Oracle Cloud user and are managed in the User Settings section for the user. The quickest way there is to select User Settings from the profile menu option in the top right corner.
Once in the User Settings section, select the Auth Tokens resources tab and click on the Generate Token button. In the popup dialog enter a name for the token. This name can be anything that helps describe the purpose of the token and makes it easier to identify later for deletion if necessary. What really matters is the generated token value because that is effectively the secret password that will be stored in both the source and target databases. After entering a name, in this case DataPumpToken, press the Generate Token button to produce a token value. The value is hidden by default. Copy that value as it will not be retrievable once the dialog box is closed. That token value will be used later when creating Database Credentials in both databases.
Database Account – Source and Target Schemas
The source schema will have tables with data that will be exported to the Object Storage bucket. That user must be created, granted the necessary privileges, REST Enabled, and have the tables created too. With the exception of creating tables, the same actions will be taken for the database account in the target database. In both cases the schema/user will be called DBTST. In the source database, the 19c one, the ADMIN user can create a REST Enabled user through Database Actions.
The same can be done by the ADMIN user in the 21c target database. Once created these users can invoke the Data Pump services in ORDS Database API but won’t achieve much as they do not have the necessary privileges. In both databases the ADMIN user must run the following to grant these privileges.
-- REQUIRED FOR DATA PUMP TO OBJECT STORAGE
GRANT EXECUTE ON DBMS_CLOUD TO DBTST;
-- REQUIRED FOR DATA PUMP LOGS
GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO DBTST;
-- THESE EXPLICIT GRANTS ARE REQUIRED FOR DATA PUMP
GRANT CREATE ROLE TO DBTST;
GRANT CREATE VIEW TO DBTST;
GRANT CREATE SESSION TO DBTST;
GRANT CREATE TABLE TO DBTST;
GRANT CREATE SEQUENCE TO DBTST;
GRANT UNLIMITED TABLESPACE TO DBTST;
GRANT CREATE PROCEDURE TO DBTST;
Note that the above is not necessary for users with the DBA role as those users will have these privileges already.
Now we get into defining the credentials that both source and target database accounts will use. Although Data Pump can work with global credentials, the ORDS Database API currently restricts the use of credentials to just those that the user has defined themselves. In other words, the credential must be found by name in the USER_CREDENTIALS view and be ENABLED. Log in to both the source and target database accounts and run this command, using the token value copied from the Auth Token step and the corresponding username…
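A sketch of that command, with placeholder username and token values (the credential name matches the one used in the requests later):

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DATAPUMP_CRED_NAME',
    username        => '<your Oracle Cloud username>',
    password        => '<the Auth Token value>'
  );
END;
/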
In the above example the credential name given is DATAPUMP_CRED_NAME. It could be called anything. Whatever name is given, this is the name that is used in the request payload to initiate an export or import job later.
The purpose of this exercise is to show data being copied from a source (19c Autonomous Database) to a target (21c Autonomous Database), so let's create some tables and put some data in them. For the source database account, run this dpexp_setup.sql script.
Setup a REST client
Any client for sending HTTP requests will do, and curl is a common command line utility used in many examples. Although ubiquitous, curl does have a few drawbacks for REST calls; in particular, it does not format the response structure well. In this demonstration we'll use the Postman app to make the requests. For convenience we will import the OpenAPI document that is generated by ORDS. The URL for the openapi.json document will be different for the source and target REST Enabled schemas and we can take advantage of that to have separate API collections.
To get started, import the API document for the source schema and set the Basic Authentication credentials that will be inherited by all the requests in the collection. Rename the collection to make it clearer what it is for.
Make your move
Now it’s time to do what we came here for…
Export from the source Autonomous Database
See the files in Object Storage
Import to the target Autonomous Database
See the data in the new tables
For this demonstration we'll export all tables, except for JOB_HISTORY, in the DBTST schema to the bucket called DataPumpBucket. The URL for the bucket and the Data Pump filename pattern will be used. You can derive the URL from the OCI console Bucket details page, which ends with /DataPumpBucket/objects. Replace the https://console. prefix with https://objectstorage. Then replace the DataPumpBucket/objects suffix with DataPumpBucket/o/ and add the filename pattern. In this case we'll go with EXPDAT%U-DBTST.DMP. Oracle Data Pump will create numbered files with that pattern in the bucket.
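A sketch of the export request body, limited to the parameters discussed in this article (the Database API accepts more options, and the <derived bucket URL> placeholder stands for the Object Storage URL derived above). The endpoint path shown is how I understand the Database API structures its Data Pump jobs resource, so take it from your imported collection rather than from here:

POST https://<your-adb-host>/ords/dbtst/_/db-api/stable/database/datapump/jobs/

{
  "operation": "EXPORT",
  "credential_name": "DATAPUMP_CRED_NAME",
  "file_name": "<derived bucket URL>EXPDAT%U-DBTST.DMP"
}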
Now that the Data Pump Job is created its status and log can be retrieved. It’s easy to create a new REST request in Postman just by clicking on the link in the response body. Save the request definition so that the previously defined Basic Authentication can be inherited.
Review Object Storage
Refresh the bucket details page to see that Oracle Data Pump has created files there.
Now for getting the data imported to the target database. As before, the Postman app has the Database API document imported as a collection so that the URL and basic authorisation details are defined in one place. Using the Database API we can confirm that this DBTST schema does not have any tables.
Next, submit the import request. The credential_name and file_name parameters are the same as for the export, but the operation parameter value should now be IMPORT.
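So the import body might look like this, with the same placeholder conventions as the export sketch above:

{
  "operation": "IMPORT",
  "credential_name": "DATAPUMP_CRED_NAME",
  "file_name": "<derived bucket URL>EXPDAT%U-DBTST.DMP"
}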
Now review the Data Pump import job status and confirm that the tables have been created.
Now that the tables are there, Database Actions can be used to confirm they have data too.
What has been demonstrated here is just one of the many options now available for getting data to, from or between Autonomous Databases. There were a few prerequisite steps but once they are in place, exporting and importing can be repeated with just one request each.
Although an on-premises database may not have the DBMS_CLOUD integration for accessing Oracle Cloud Object Storage, the ability for non-administrator users to import and export to their database directory is now available through the ORDS Database API.
The demonstration showed exporting and importing tables, but don't forget that through the ORDS Database API Data Pump services you can do much more: export a full database, remap a schema on import, and so on. Over to you to take this further in your environment.