
Using the PL/SQL Profiler in the Oracle Autonomous Database

In Autonomous, DBA, Developer, Oracle database, PL/SQL on November 13, 2024 at 18:45

The PL/SQL (Hierarchical) Profiler has been part of the Oracle database for quite some time, but there is very little information, close to none, on how to use it in the Oracle Autonomous Database.

This blog post is about how to work around a few hurdles in the Autonomous Database, since we lack OS access in ADB-S.

A good starting point on how to do PL/SQL Tuning with PL/SQL Hierarchical Profiler is Ulrike Schwinn’s blog post.

There are 2 main packages: DBMS_PROFILER and DBMS_HPROF, introduced in 8i and 11gR1 respectively. Tim Hall explains in detail how to configure and use both DBMS_PROFILER and DBMS_HPROF.

For the PL/SQL Hierarchical Profiler (DBMS_HPROF), you need to specify a directory/location and a file name where the profiler output file will be placed. Then, in order to generate formatted and readable HTML reports from the raw profiler output file, you need to run plshprof (located in $ORACLE_HOME/bin/). Now, this is rather tricky in ADB as we do not have direct OS access.

As we cannot create physical directories on the server, here is the workaround: let us use (for example) DATA_PUMP_DIR (there are a few options in DBA_DIRECTORIES) and run the profiler on a procedure called MONGODB_TO_ORACLE (it is a real procedure migrating MongoDB to Oracle, in case you are wondering 🙂 ):

execute dbms_hprof.start_profiling(LOCATION=>'DATA_PUMP_DIR',FILENAME=>'profiler.txt');
execute MONGODB_TO_ORACLE;
execute dbms_hprof.stop_profiling;

Afterwards, when the profiling completes, we can run the ANALYZE function to analyze the raw data and place it in the (hierarchical) profiler tables. Then, we can check what files were produced (note the profiler.html file!):

DECLARE
  l_runid  NUMBER;
BEGIN
  l_runid := DBMS_HPROF.analyze (
               location    => 'DATA_PUMP_DIR',
               filename    => 'profiler.txt',
               run_comment => 'MongoDB_to_Oracle_migration');                   
  DBMS_OUTPUT.put_line('l_runid=' || l_runid);
END;
/
SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');

To access the log file you need to move the log file to your Cloud Object Storage using the procedure DBMS_CLOUD.PUT_OBJECT. Note that PUT_OBJECT is overloaded. That is, in one form the procedure PUT_OBJECT copies a file from ADB to Cloud Object Storage. In another form, the procedure PUT_OBJECT copies a BLOB from Autonomous Database to the Cloud Object Storage. For example, the following PL/SQL block moves the file profiler.html to your Cloud Object Storage:

BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/juliandon/b/adbkofa/o/profiler.html',
    directory_name  => 'DATA_PUMP_DIR',
    file_name => 'profiler.html');
END;
/

Note that if you run DBMS_CLOUD.PUT_OBJECT with a user other than ADMIN, you need to grant read privileges on the directory to that user.
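
A minimal sketch of that grant, run as ADMIN (the user JULIAN is hypothetical here):

-- the user needs to read the profiler output file from the directory
GRANT READ ON DIRECTORY DATA_PUMP_DIR TO JULIAN;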

In case you would prefer using the old profiler (DBMS_PROFILER) in ADB, you can get the two scripts proftab.sql and profiler.sql from another Oracle installation and copy them locally to your computer. The proftab.sql script creates 3 tables and a sequence, while profiler.sql creates the HTML report. Edit the profiler.sql script to spool the output to a local directory (search for SPO).

You can also get profiler.sql from MOS: Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1). In case you do not have access to another installation, here are the create table commands:

create table plsql_profiler_runs
(
  runid           number primary key,  -- unique run identifier,
                                       -- from plsql_profiler_runnumber
  related_run     number,              -- runid of related run (for client/
                                       --     server correlation)
  run_owner       varchar2(128),       -- user who started run
  run_date        date,                -- start time of run
  run_comment     varchar2(2047),      -- user provided comment for this run
  run_total_time  number,              -- elapsed time for this run
  run_system_info varchar2(2047),      -- currently unused
  run_comment1    varchar2(2047),      -- additional comment
  spare1          varchar2(256)        -- unused
);

comment on table plsql_profiler_runs is
        'Run-specific information for the PL/SQL profiler';

create table plsql_profiler_units
(
  runid              number references plsql_profiler_runs,
  unit_number        number,           -- internally generated library unit #
  unit_type          varchar2(128),    -- library unit type
  unit_owner         varchar2(128),    -- library unit owner name
  unit_name          varchar2(128),    -- library unit name
  -- timestamp on library unit, can be used to detect changes to
  -- unit between runs
  unit_timestamp     date,
  total_time         number DEFAULT 0 NOT NULL,
  spare1             number,           -- unused
  spare2             number,           -- unused
  --
  primary key (runid, unit_number)
);

comment on table plsql_profiler_units is
        'Information about each library unit in a run';

create table plsql_profiler_data
(
  runid           number,           -- unique (generated) run identifier
  unit_number     number,           -- internally generated library unit #
  line#           number not null,  -- line number in unit
  total_occur     number,           -- number of times line was executed
  total_time      number,           -- total time spent executing line
  min_time        number,           -- minimum execution time for this line
  max_time        number,           -- maximum execution time for this line
  spare1          number,           -- unused
  spare2          number,           -- unused
  spare3          number,           -- unused
  spare4          number,           -- unused
  --
  primary key (runid, unit_number, line#),
  foreign key (runid, unit_number) references plsql_profiler_units
);

comment on table plsql_profiler_data is
        'Accumulated data from all profiler runs';

create sequence plsql_profiler_runnumber start with 1 nocache;

That’s it:

SQL> start profiler.sql 1

 RUNID RUN_OWNER                 RUN_DATE        RUN_COMMENT                   
------ ------------------------- --------------- ------------------------------
     1 ADMIN                     07-NOV-24 10:17 07-NOV-24                     

Parameter 1:
RUNID (required)

Value passed:
~~~~~~~~~~~~
RUNID: "1"

PROFILER file has been created:
profiler_NWUWD5H8UG_POD_23.0.0.0.0_20241107_102240.html.

 RUNID RUN_OWNER                 RUN_DATE        RUN_COMMENT                   
------ ------------------------- --------------- ------------------------------
     1 ADMIN                     07-NOV-24 10:17 07-NOV-24                     

Performance tuning of queries over JSON data in Oracle Database 23ai

In 23ai, Autonomous, Database tuning, DBA, JSON, MongoDB API on September 2, 2024 at 09:19

Oracle Database 23ai offers 4 main ways of tuning SQL queries over JSON data: (1) indexes, (2) materialized views, (3) in-memory column storage, and (4) Exadata storage-cell pushdown. This blog post looks into how to index JSON.

It is not a surprise to anyone that the Oracle Autonomous Database makes it extremely easy to index JSON data:

Prior to Oracle Database 21c, we were not able to create a multivalue index for fields that may occur multiple times in a document because they are contained in objects within an array (objects as elements or at lower levels within elements). Back then, we sometimes created a materialized view that extracted the data to be indexed, and then created a function-based index on that view's data.

With the new Oracle Database 23ai, things have changed and now we can create multivalue indexes: either using SQL or via the MongoDB API.

Let us consider the following collection of different coffee types, their prices, sizes and quantities, and note that a couple of them (Espresso and Finlandia) have an additional field with some extra details.

Here is how the data looks from MongoDB Compass:

And here is the insert statement (using mongosh) for creating the same data set.

Let us create (using SQL) a multivalue index for the values of the field “Code”:

CREATE MULTIVALUE INDEX MVI_Code ON coffee c (c.data.CoffeeItems.Details.Code.numberOnly());

Let us verify that the index is actually created and also being utilized using mongosh:

Using index hints is possible with a MongoDB hint that names the index. Note that Oracle does not support the use of MongoDB index specifications — just provide the index name.

db.COFFEE.find({"CoffeeItems.Details.Code":35801}).hint("MVI_CODE");

Verify with:

db.COFFEE.find({"CoffeeItems.Details.Code":35801}).hint("MVI_CODE").explain();

Finally, you can specify an Oracle SQL hint by using the query-by-example (QBE) operator $native, which is an Oracle extension to the MongoDB hint syntax. You can pass any SQL hint using $native. In particular, you can turn on monitoring for the current SQL statement using the hint MONITOR. The code below does that for a query looking for any coffee with a price of 10:
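
A hedged sketch only: the field name CoffeeItems.Price is an assumption here, and the hint-object form follows the description above (the original screenshot shows the exact query):

db.COFFEE.find({"CoffeeItems.Price":10}).hint({"$native":"MONITOR"});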

Check here for more details on how to index JSON data.

Disaster Recovery for the Oracle Autonomous JSON Database

In Autonomous, Cloud, DBA, JSON, Oracle database on March 12, 2024 at 17:13

“You can’t prepare for everything – but you can prepare for anything.” – Paul Youngjohns

Backup-Based Disaster Recovery is a low-cost DR option for databases with higher RTO tolerance.

As Autonomous Data Guard is currently not supported for Autonomous JSON Database workloads, the natural question is: what are the alternatives?

Option 1. The backup-based Disaster Recovery solution is one alternative. Backup-based DR uses database backups to instantiate a peer database at the time of switchover or failover. This enables you to have a lower cost and higher Recovery Time Objective (RTO) disaster recovery option for your Autonomous Database, as compared with Autonomous Data Guard.

Disaster recovery for AJD provides a peer database instance in a different availability domain (or a different Exadata machine if there is only 1 AD in the region) or in a different region around the world. With a peer database, if the primary database becomes unavailable, disaster recovery switches the role of the peer database to primary and begins recreating a new peer database. For backup-based disaster recovery with a cross-region peer, backups are copied to the remote region.

For local backup-based disaster recovery, existing local backups are utilized. You can edit the automatic backup retention period and the long-term backup schedule. Check that the backup state is “Active” and when the last automatic backup went through. Your local peer will be in a different Availability Domain (AD) than the primary database in regions with multiple ADs, or on a different Exadata machine in regions with only one AD.

There is no additional cost for local Backup-Based Disaster Recovery! And AJD already takes local backups automatically for you, so there is no additional cost to enable a local backup copy.

Backup-Based Disaster Recovery RTO and RPO numbers are:

Backup-Based Disaster Recovery Configuration    RTO                           RPO
Local backup copy                               1 hour + 1 hour per 5 TB      10 seconds
Cross-region (remote) backup copy               1 hour + 1 hour per 5 TB      1 minute

When you have a local peer and a switchover is not successful, the Oracle Cloud Infrastructure console shows a banner with information about why the switchover was not successful, and a failover link in the Role field that you can click to initiate a failover to the local peer. The failover link only shows when the Primary database is unavailable and a peer is available, that is, when the Primary database Lifecycle State field shows Unavailable and the local peer is available.

You may also have one additional backup copy, in another region. Here is one I added to the Swiss region:

I also enabled cross-region backup replication from Germany to Switzerland. Check the 2 informational boxes below (cost and replication of backups):

By default, automatic backups are created and maintained at the current Primary database and are not replicated to a cross-region peer. Optionally, you can enable replication of the automatic backups to the cross region peer (as I have done above). A cross-region Backup-Based Disaster Recovery peer can be converted to a snapshot standby. This converts the peer to a read-write database for up to two days.

Note that Backup-Based Disaster Recovery is not available with Always Free Autonomous Database.

Option 2. For having a copy of the Autonomous JSON Database in a different region (and not just in another AD), an option to consider is Refreshable Clones.

When you create a refreshable clone for an Autonomous Database instance, the system clones the source database to the refreshable clone. After you create a refreshable clone, you can refresh the clone with changes from the source database.

As you can see my refreshable clone is in Switzerland (while the source JSON Database is in Germany):

When you disconnect a refreshable clone, it is disassociated from the source database. This converts the database from a refreshable clone to a regular database. Following the disconnect operation, you are allowed to reconnect the disconnected database to the source database; the reconnect operation is limited to a 24-hour period.

Refreshable clones are billed based on their base ECPU count and any additional ECPU usage if compute auto scaling is enabled; they do not get billed additionally for the ECPUs of the source database. A refreshable clone in a different region than its source database is billed for twice the amount of storage that the source database is billed for.

You can check the main features of refreshable clones but it is most important to know that refreshable clones have a one week refresh age limit. If you do not perform a refresh within a week, then the refreshable clone is no longer refreshable. After a refreshable clone passes the refresh time limit, you can use the instance as a read only database or you can disconnect from the source to make the database a read/write (standard) database.

There are several important limitations on refreshable clones; these are the main ones:

  • Always Free Autonomous Databases do not support refreshable clones
  • You cannot create a cascading series of refreshable clones
  • You cannot backup or restore a refreshable clone

For the Oracle Autonomous JSON Database, note the following when reconnecting to the source database:

  • If, after you disconnect the refreshable clone, you promote both the clone and the source to Oracle Autonomous Transaction Processing (workload type Transaction Processing), you can reconnect the database to the source.
  • If after you disconnect the refreshable clone, you promote the source database to Oracle Autonomous Transaction Processing (workload type Transaction Processing) and do not promote the disconnected clone, the disconnected clone must also be promoted to Oracle Autonomous Transaction Processing (workload type Transaction Processing) before you perform the reconnect operation.
  • If after you disconnect the refreshable clone, you promote the disconnected database to Oracle Autonomous Transaction Processing (workload type Transaction Processing), you can still reconnect to the source but the reconnected database remains in the promoted state.

Option 3. Oracle GoldenGate is another way to replicate your data to another region. You can add a Replicat for Autonomous JSON Database. It is even possible to use Oracle GoldenGate to replicate MongoDB to AJD, which is good for the use case of migrating out of MongoDB to Oracle.

Performance Tuning in the Oracle Autonomous Database: Creating Indexes for JSON Collections

In Autonomous, Cloud, Database tuning, DBA, JSON, Oracle database, Performance on February 14, 2024 at 20:22

The Oracle Database and in particular the Oracle Autonomous Database offers multiple technologies to accelerate queries and improve database performance over JSON data, including indexes, materialized views, in-memory column storage, and Exadata storage-cell pushdown.

Simple Oracle Document Access (SODA) is a set of NoSQL-style APIs for storing, editing and deleting JSON documents. A SODA collection is a set of documents (in particular JSON) that is backed by an Oracle Database table or view. By default, creating a SODA document collection creates the following in the Oracle Database:

  • Persistent default collection metadata.
  • A table for storing the collection, in the database schema to which your SODA client is connected.

If the init.ora parameter compatible is at least 20, then SODA uses the JSON data type by default for JSON content, and the version method is UUID. If the init.ora parameter compatible is less than 20, then SODA uses BLOB storage by default for JSON content, and the data is textual (character) JSON.

Collection metadata is composed of multiple components. The kind of database you use determines the collection metadata that is used by default, and which of its field values you can modify for custom metadata.

Let us consider a collection of movies within the ADMIN schema in a 19c Oracle Autonomous JSON Database (think of one JSON document per movie). After opening JSON from Database Actions, we will create an index on the MOVIES collection:

Creating an index for a JSON collection requires the name and the type. The different options are Functional, Spatial and Search.

Before creating a functional index on the runtime field (we will be running order-by queries against it), let us see which fields are needed (fields to enter):

  • For a functional type index, the fields to enter are:
    • Unique: make all indexed values unique
    • Index Nulls: use the index in order-by queries
    • Path Required: the path must select a scalar value, even a JSON null value
    • Properties: select the property that you want to index on, or, easier, just type * to display all available document properties in the collection
    • Composite Index: use more than one property
    • Advanced: change the storage properties of the indexed property
  • For search index, the options are:
    • Dataguide off-on: create JSON data guide for collection
    • Text Search off-on: index all properties in documents to support full-text search based on string equality
    • Range Search off-on: choose on to support range search when string-range search or temporal search (equality or range) is required
  • Spatial index is used to index GeoJSON geographic data. The selected property should be of GeoJSON type. See Using GeoJSON Geographic Data. For spatial index, the options are:
    • Path Required: the path must select a value, even if it is a JSON null value
    • Lax: the targeted field does not need to be present or does not have a GeoJSON geometry object as its value

The properties of the selected index appear in JSON format below the listed indexes:

We can also index a singleton scalar field using SQL, here is how:

CREATE INDEX YEAR_NDX ON movies (json_value(data, '$.year.number()' ERROR ON ERROR));

Item method numberOnly() is normally used in the path expression that identifies the field to index, to ensure that the field value is numeric; it is preferred over method number(), because number() also allows conversion of non-numeric fields to numbers. However, as the year field in one of my JSON documents is a string, I get the following error if I use numberOnly(): ORA-01722: invalid number. That is why the index above uses number(). Clearly, as some movies are produced during the same year, I cannot make the index unique: ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found.

Creating a relational view over a collection of JSON documents is simple: select the columns we need, say title, year and runtime:

In SQL, we can use the view and order the movies by runtime:
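
A minimal sketch of both steps; the blog created the view via the JSON page, so the view name and the top-level title, year and runtime paths below are assumptions:

CREATE OR REPLACE VIEW movies_v AS
SELECT jt.title, jt.year, jt.runtime
  FROM movies m,
       JSON_TABLE(m.data, '$'
         COLUMNS (title   VARCHAR2(200) PATH '$.title',
                  year    NUMBER        PATH '$.year.number()',
                  runtime NUMBER        PATH '$.runtime.number()')) jt;

SELECT title, year, runtime FROM movies_v ORDER BY runtime;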

Regardless of your database release you can create whatever Oracle Database indexes you need directly, using:

(1) the JSON Page of Using Oracle Database Actions, check Creating Indexes for JSON Collections

(2) Simple Oracle Document Access (SODA)

(3) SQL — see Indexes for JSON Data in Oracle Database JSON Developer’s Guide.

Using the JSON page in ADB is perhaps the easiest approach to indexing JSON data.

The static data dictionary view USER_SODA_COLLECTIONS lists the basic features of all of your SODA collections. Correspondingly, you also have DBA_SODA_COLLECTIONS and ALL_SODA_COLLECTIONS. Collection metadata, expressed in JavaScript Object Notation (JSON), can be obtained from the last column, called JSON_DESCRIPTOR.

Users will typically work with JSON collections using native language drivers, for example, SODA for Java or SODA for Python. SODA native language drivers generally provide more throughput (operations per second) than the REST driver (SODA for REST). It is recommended to configure the SODA drivers as follows:

  • Enable SODA Metadata Cache: The SODA driver needs to know the metadata of each JSON collection (the column names, types, etc.). By enabling the metadata cache, roundtrips to the database can be saved, improving latency and throughput.
  • Enable Statement Cache: Statement caching improves performance by caching executable statements that are used repeatedly, such as in a loop or in a method that is called repeatedly. For Java, the statement cache is enabled using JDBC.
  • For load-balanced systems, turn off DNS caching: Load balancing allows SODA operations to be distributed across different nodes. If DNS caching is turned on, then all connections are likely to use the same node, nullifying the load balancing. For Java, the following system property should be set: inet.addr.ttl=0

The database performance tuning techniques also apply to SODA: for example, SODA collections can be partitioned or sharded, and queries can be accelerated using indexes and/or materialized views.

How to monitor all that? You can turn on performance monitoring of the SQL operations that underlie a SODA read or write operation, by adding a SQL hint to the SODA operation.

You can use this mechanism to pass any SQL hint, but MONITOR (turn on monitoring) and NO_MONITOR (turn off monitoring) are the useful ones for SODA; an inappropriate hint can cause the optimizer to produce a suboptimal query plan.

Note in addition that when using SODA with Autonomous Database the following restrictions apply:

  • Automatic indexing is not supported for SQL and PL/SQL code that uses the SQL/JSON function json_exists. See SQL/JSON Condition JSON_EXISTS for more information, but note that all the collection APIs use JSON_EXISTS for filtering, so … the benefits might be negligible.
  • Automatic indexing is not supported for SODA query-by-example (QBE)

Final note: since Auto Indexing is disabled by default in the Autonomous Database, you can enable it by running: EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

The 5 major benefits of ECPUs in the Oracle Autonomous Database

In Autonomous, Cloud, Consolidation, DBA, JSON on January 25, 2024 at 08:03

Oracle Autonomous Database (ADB) recently introduced a more advanced ECPU billing metric and is now retiring the legacy OCPU billing metric for Autonomous Data Warehouse and Autonomous Transaction Processing over the next 12 months. Oracle recommends switching from the OCPU billing metric to the ECPU billing metric, which will not incur any downtime or service interruptions.

ECPUs will provide the same great price-performance as OCPUs and continuous improvements in price-performance over time. Updating to the ECPU billing metric provides the following benefits:

  • 50% lower entry cost: The smallest Autonomous Database that can be provisioned with ECPUs is 50% less expensive ($0.672 per hour vs $1.3441 per hour with OCPUs)
  • Finer granularity for database scaling: Each incremental increase in ECPU database size is only $0.336
  • Lower storage costs: Autonomous Data Warehouse storage price is reduced from $118.40 to $25.00 per TB per month, and Autonomous Transaction Processing storage can be provisioned in increments of 1GB (this is a huge thing!), with a minimum of 20GB. This brings the ADW in-database storage price on par with object storage and thus helps to build data lakes based solely on architectural requirements rather than on cost
  • Up to 87% lower costs with database consolidation: Elastic Resource Pools, available on ECPU ADB Serverless databases, help consolidate deployments leading to major cost savings
  • New features for Autonomous Database may only be available with ECPUs

Note that the prices mentioned above are the current list prices for Autonomous Databases with the License Included license type. Please, refer to the ECPU billing metric FAQ for more details about ECPUs and each of these benefits.

There are also differences in backups between OCPUs and ECPUs. ECPU backup storage is billed separately, and the backup retention period may be selected between 1 and 60 days. With OCPUs, 60 days of backup storage is included in the storage price. This new customer-controllable ECPU backup is beneficial because customers can now control the backup storage size and further reduce the cost of dev/test environments. Here is how I reduced the retention from 60 to 31 days (later on I reduced it to 7 days).

I scaled down my database in 2 phases: (1) I switched to the ECPU model (1 OCPU –> 4 ECPUs) and then (2) reduced the ECPU count from 4 to 2 and the storage from 1024GB to 20GB (those two in one go, with no downtime).

Here are some general guidelines related to the new ECPU metric:

  1. Provision all new Autonomous Data Warehouse and Autonomous Transaction Processing databases or clones with the ECPU billing metric
  2. Update all existing databases to the ECPU billing metric, which is a simple and seamless button click or API call
  3. Note that if you choose not to update your existing databases’ billing metric at this time, Oracle may convert your databases from the OCPU billing metric to the ECPU billing metric in the future

Updating your Autonomous Database Serverless to the ECPU billing metric will have no impact on your service and incur no downtime. Oracle Autonomous Database will be retiring the OCPU-based SKUs and replacing them with the ECPU-based SKUs. Starting in August 2023, some new Autonomous Database features may be available only on ECPUs. For example, Elastic Resource Pools are only available with ECPUs.

Note that ECPUs have also already been introduced for MySQL HeatWave on AWS, and other services may also offer ECPUs in the future.

Here is some additional information regarding the ECPU announcement:

3 small tricks for DBAs in Oracle Database 23c

In Autonomous, DBA, New features, Oracle database on June 26, 2023 at 08:46

Although the new Oracle Database 23c Free is in fact a Developer Release, there are still some interesting and useful DBA features.

Here are 3 that I find simple and good to know:

1. Control PDB Open Order:

DBAs can define a startup order or priority for each pluggable database (PDB) where the most important PDBs are started first. The priority is applied to PDB opening order and upgrade order as follows:

  • Restoring PDB states when opening the CDB
  • Setting PDB states when using the PDB OPEN ALL statement
  • Setting the order for PDB database upgrade operations
  • Starting PDBs in an Active Data Guard (ADG) switchover or failover

This feature allows critical PDBs to start and open before less important PDBs, reducing the time for the critical applications to become usable.

Note: if several pluggable databases have the same priority, then Oracle will open them based on CON_ID (lowest first).
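
A minimal sketch of setting the priority, assuming a PDB called PDB1 (hypothetical name):

-- assign an open/upgrade priority to the PDB, as described above
ALTER PLUGGABLE DATABASE pdb1 PRIORITY 1;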

2. Read only users:

DBAs can make a user become READ-ONLY!

You can set the access of a local user to a PDB to READ ONLY or READ WRITE with the ALTER USER statement.
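
For example (JULIAN being a hypothetical local user):

ALTER USER julian READ ONLY;
-- and back again:
ALTER USER julian READ WRITE;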

3. Increased Maximum Password Length:

The times of system/manager, sys/change_on_install and connect internal are clearly gone. In 23c, the Oracle Database supports passwords up to 1024 bytes in length, while in previous releases the Oracle Database password length and the secure role password length could be up to 30 bytes.

The increased maximum password length to 1024 bytes provides the following benefits:

  • It accommodates passwords that are used by Oracle Identity Cloud Service (IDCS) and Identity Access Management (IAM)
  • The increase to 1024 bytes enables uniform password rules for all Cloud deployments
  • The 30-byte limitation was too restrictive when password multi-byte characters used more than 1 byte in an NLS configuration

Really? 18446744073709551549 days 🙂 ? How many years would that be?

Obviously a long time ago 🙂

Here is something more: the In-Memory column store will now (in 23c) automatically grow and shrink dynamically based on the workload, allowing IMCS to be available on ADB. With automatic In-Memory sizing, there is no longer a need to manually resize the In-Memory column store to accommodate different database workloads, thus reducing the administrative effort of enabling Database In-Memory. Automatic In-Memory sizing also allows the In-Memory column store to be enabled on Autonomous Database, enabling applications running on ADB to take advantage of faster analytic query performance.

This feature can be turned off with the underscore parameter _enable_space_preallocation:  

ALTER SYSTEM SET "_enable_space_preallocation" = 0;

The feature can be turned on again at any time by setting "_enable_space_preallocation" = 3, which is the default value:

ALTER SYSTEM SET "_enable_space_preallocation" = 3;

The parameter is dynamic, so it can be modified online. It is a bitmask accepting 4 different values, which can be combined (the default of 3 = 1 + 2):

  • 0 – turns off the tablespace pre-extension feature
  • 1 – enables tablespace extension
  • 2 – enables segment growth
  • 4 – enables chunk allocation

Also, Exadata scan performance in 23c is further improved for objects that are partially populated.

DBA_OPERATOR_ACCESS and SYSDATE_AT_DBTIMEZONE in ADB-S

In Autonomous, DBA, OCI, Oracle database on October 13, 2022 at 08:36

In a blog post in 2020, entitled SYSDATE and Time Zones in the Autonomous Database, I covered the sysdate/systimestamp issue in ADB-S. Basically, you are allowed to change the database and session time zones in ADB, but this does not change what SYSDATE and SYSTIMESTAMP return. So, PL/SQL packages, procedures and functions, and in particular all SQL using SYSDATE and SYSTIMESTAMP, might not return what you expect.

But now, there is a parameter called SYSDATE_AT_DBTIMEZONE, available at the system level. Depending on the value of SYSDATE_AT_DBTIMEZONE, you see either the date and time based on the default Autonomous Database time zone, Coordinated Universal Time (UTC), or based on the time zone that you set in your database.

Here is how it works. Let us first check the database timezone:

The value of SYSDATE_AT_DBTIMEZONE is the default, FALSE:

With the default value of FALSE, I see GMT time:

If I change from FALSE to TRUE, then I see database TZ time:
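
A minimal sketch of the above steps; the session-level ALTER SESSION form is an assumption here (the screenshots show the exact commands and output):

SELECT DBTIMEZONE FROM dual;                      -- check the database time zone
SELECT SYSDATE FROM dual;                         -- with the default FALSE: UTC time
ALTER SESSION SET SYSDATE_AT_DBTIMEZONE = TRUE;
SELECT SYSDATE FROM dual;                         -- now: database time zone time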

If you decide to change the TZ, then you must restart the Autonomous Database instance for the change to take effect.

So, when SYSDATE_AT_DBTIMEZONE is FALSE in a session, calls to SYSDATE and SYSTIMESTAMP return values based on the default Autonomous Database time zone, Coordinated Universal Time (UTC). When SYSDATE_AT_DBTIMEZONE is TRUE in a session, calls to SYSDATE or SYSTIMESTAMP return the date and time based on the database time zone.

In case you need your application to show the database timezone (or a certain TZ) when calling SYSDATE or SYSTIMESTAMP, then change this new parameter to TRUE, set the correct TZ, if needed, and restart!

There is also a new view in ADB-S called DBA_OPERATOR_ACCESS. This view stores information on the actions that OCI cloud operations performs on your Autonomous Database. The view will not show results if Oracle Cloud Infrastructure cloud operations has not performed any actions or run any statements in your Autonomous Database instance.

The DBA_OPERATOR_ACCESS view provides information starting on October 4, 2022, the date this feature was introduced. You cannot see anything done before October 4, 2022.

The view is based on the PDB_SYNC$ table:

The view contains the following 4 columns:

1. SQL_TEXT: SQL text of the statement executed by the operator

2. EVENT_TIMESTAMP: Timestamp of the operator action in UTC

3. REQUEST_ID: Request number related to the reason behind the operator action. This could be a bug number, an SR number, or a change ticket request number that provides information on the reason for the action

4. REASON: Reason for the operator action. This provides context for the reason behind the action and may have a value such as: MITIGATION, DIAGNOSTIC COLLECTION, or CUSTOMER REQUEST

So, the DBA_OPERATOR_ACCESS view provides good and useful information on the top level SQL statements that OCI cloud operations performs.

DBMS_AUTO_PARTITION: Automatic Partitioning in the Autonomous Database

In Autonomous, Cloud, Databases, DBA on September 26, 2022 at 06:33

Choosing a partitioning strategy is often far from straightforward. Range and interval partitioning are useful when organizing similar data types, especially date and time data. Hash partitioning is useful for randomly distributing data across partitions based on a hashing algorithm, rather than grouping similar data. I remember how we managed to boost performance in a huge Siebel database by hash partitioning the big indexes. List partitioning is useful to explicitly map rows to partitions based on a set of values, for example the states in the USA.

But this is a tedious process, and when doing it, regardless of whether you are a DBA or a developer, functional knowledge of the application and of the SQL using the tables and indexes is of paramount importance.

When using the Oracle Autonomous Database, we rather prefer things to be automated, which goes without saying. So, here comes in ADB a handy package called DBMS_AUTO_PARTITION.

First, we need to configure auto partitioning with the option we need. We use the value IMPLEMENT (see the example below), but if you want to enable recommendations without implementing them automatically, use the value REPORT ONLY instead (which is the default).

You can always disable it using the OFF parameter.
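
A minimal sketch of the three modes:

EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE','IMPLEMENT');
-- recommendations only, without implementing them (the default):
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE','REPORT ONLY');
-- disable auto partitioning:
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE','OFF');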

Automatic partitioning uses a single-column partition key combined with single-level partitioning. Automatic partitioning does not support more complex partitioning strategies such as multi-column partitioned tables or composite partitioning.

Automatic partitioning chooses from the following 3 partition methods:

  • AUTOMATIC INTERVAL: This choice is best suited for ranges of partition key values
  • LIST AUTOMATIC: This partitioning method applies to distinct partition key values
  • HASH: Applies partitioning on the partition key’s hash values

Next, you can use the AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE settings to specify schemas and tables considered (or not considered) for automatic partitioning. Note that when automatic partitioning is set then all schemas and tables (in user-managed schemas) are considered for automatic partitioning if both the inclusion and exclusion lists are empty.

Here is an example where I prefer to manually decide on the partitioning strategy in the JULIAN schema, so I want to exclude it from auto partitioning:
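
A sketch of the call; the third (boolean) argument controls whether the schema goes on the inclusion or the exclusion list:

EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_SCHEMA', 'JULIAN', FALSE);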

If I wanted to do the opposite, that is, add only the schema JULIAN to the auto partitioning inclusion list, I would replace FALSE with TRUE above. As soon as the inclusion list is no longer empty, only schemas in the inclusion list are considered.

The analysis and verification of automatic partitioning using RECOMMEND_PARTITION_METHOD is potentially a resource-intensive and long-running operation, so avoid doing it in a production database; I would rather use a cloned database for automatic partitioning. The recommendations can then be implemented with the APPLY_RECOMMENDATION procedure.

It is good to know that the Autonomous Database automatically collects workload information over time in an internally managed SQL workload repository maintained in the SQL Tuning Set (SYS_AUTO_STS). Recommendations of automatic partitioning generated by the RECOMMEND_PARTITION_METHOD function have a time limit, specified by the TIME_LIMIT parameter, with a default of 1 day. If you are analyzing a large system with many candidate tables, a single invocation may not generate a recommendation for all tables and you will have to invoke the procedure repeatedly to generate recommendations for additional tables. 
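
A minimal sketch of generating and applying a recommendation; the JULIAN.SALES table is hypothetical, and only the basic parameters are shown:

DECLARE
  l_id RAW(16);
BEGIN
  l_id := DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD(
            table_owner => 'JULIAN',
            table_name  => 'SALES',
            time_limit  => INTERVAL '30' MINUTE);
  DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(l_id);
END;
/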

Here are the data dictionary views for automatic partitioning: DBA_AUTO_PARTITION_CONFIG and DBA_AUTO_PARTITION_RECOMMENDATIONS.

It is important to understand that unlike automatic indexing, automatic partitioning does not run periodically as a background task. Automatic partitioning only runs when you invoke it by using the DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD function.

Not all tables are candidates for auto partitioning. The function VALIDATE_CANDIDATE_TABLE checks whether a given table is a valid candidate for auto partitioning in Autonomous Database. There are several conditions for a table to be a valid candidate; here are a few:

  • Table passes inclusion and exclusion tests specified by AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE configuration parameters
  • Table has up-to-date statistics
  • Table is at least 64 GB
  • Table has 5 or more queries in the SQL tuning set that scanned the table
  • Table does not contain a LONG data type column.

As you can see below, my SALES table is not a candidate as the statistics are stale:
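
A sketch of the check itself (the function returns either VALID or the reason the table does not qualify; JULIAN.SALES again hypothetical):

SELECT DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE(
         table_owner => 'JULIAN',
         table_name  => 'SALES')
  FROM dual;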

Last, here are a few underscore/hidden parameters related to auto partitioning (from a 21c database):

_autoptn_costing: DBMS_AUTO_PARTITION is compiling query for cost estimates
_autoptn_flags: DBMS_AUTO_PARTITION flags
_autoptn_translate_table_name: DBMS_AUTO_PARTITION table name translation
_autoptn_workload_id: DBMS_AUTO_PARTITION workload id

The package itself is, though, missing in 21c:

SQL> desc DBMS_AUTO_PARTITION
ERROR:
ORA-04043: object DBMS_AUTO_PARTITION does not exist

If you are interested in also auto partitioning the application, then I would recommend a rather scientific paper called Automatic Partitioning of Database Applications. But it is built on the assumption that stored procedures have several disadvantages, and not everyone would agree with that.

Automatic Segment Advisor in the Oracle Database

In Autonomous, Database tuning, DBA, Oracle database, Oracle internals on July 1, 2022 at 06:50

“From my experience, the best advisors help in three ways: encourage you to look at the problem or opportunity from multiple angles; help you balance the tug of the short-term with important long-term priorities; and ask the tough questions you need to know to reach the best solution.” Margo Georgiadis

The Oracle Segment Advisor identifies segments that have space which can be reclaimed. However, the Automatic Segment Advisor can at times be resource-consuming and even slow down your database:

Why is this happening, given that the Automatic Segment Advisor does not analyze every database object? Here is how it works internally: the advisor examines the database statistics, samples segment data, and then selects the following objects to analyze:

  • Tablespaces that have exceeded a critical or warning space threshold
  • Segments that have the most activity
  • Segments that have the highest growth rate

In addition, the Automatic Segment Advisor evaluates tables that are at least 10MB in size and have at least 3 indexes to determine the amount of space saved if the tables are compressed with advanced row compression.

Now, here is the important part: if a database object is selected for analysis by the advisor but the maintenance window expires before the advisor can process the object, the object is included in the next Automatic Segment Advisor run. So, at one point you may come to a situation where lots of objects have to be analyzed. During the maintenance window, the following clients/tasks are run; these are the predefined automated maintenance tasks:

Sometimes, it makes sense to disable the auto space advisor, as you cannot change the set of tablespaces and segments that the Automatic Segment Advisor selects for analysis. You can, however, enable or disable the Automatic Segment Advisor task, change the times during which it is scheduled to run, or adjust automated maintenance task system resource utilization. Especially in a situation like this:

Often, ADDM may also point to the problem with the Automatic Segment Advisor task:

Here is how to disable (and enable) the tasks individually. The main switch is controlled by DBMS_AUTO_TASK_ADMIN.DISABLE().
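
A minimal sketch of disabling (and re-enabling) just the Segment Advisor client:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
-- and to re-enable it:
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/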

Even after disabling the entire autotask job by running DBMS_AUTO_TASK_ADMIN.DISABLE(), DBA_AUTOTASK_TASK will still show the autotask client as enabled. For this, you need to disable the jobs individually, as shown above. And in a multitenant environment, the CDB and the PDBs have their own autotasks; disabling the CDB's autotask will not affect the PDBs, so you will have to do it for each pluggable database.

Starting with 12.2, there is a parameter called ENABLE_AUTOMATIC_MAINTENANCE_PDB that can be used to enable or disable the running of automated maintenance tasks for all the PDBs in a CDB or for individual PDBs in a CDB. When you change ENABLE_AUTOMATIC_MAINTENANCE_PDB in the CDB root from TRUE to FALSE, the new value FALSE takes effect in the root and in all the PDBs in the CDB.

If you get into a situation where the Automatic Segment Advisor is consuming lots of resources and slowing down the database during the maintenance windows, do one of the following:

  1. Disable the autotask client for the segment advisor and run it manually on a per-need basis.
  2. Increase the maintenance window from the default, which starts at 10 p.m. on Monday to Friday and ends at 2 a.m. Often 4 hours is just not enough. The weekend window is 20 hours long and in most cases long enough.

In the Autonomous Database, you have access to dba_autotask_client, etc., and you can disable and enable the auto space advisor task; however, you do not have full visibility on dba_scheduler_window_groups, etc. Some columns just show as NULL. Still, the performance task in ADB can give you some idea of what is going on:

And finally, here are some other situations that might require disabling the Auto Space Advisor Job:

  • ORA-01555 While Running Auto Space Advisor Job on Object wri$_adv_objspace_trend_data (Doc ID 2576430.1)
  • Auto Space Advisor is Taking More Time due to Recursive Query Taking a Long Time (Doc ID 2382419.1)
  • SEGMENT ADVISOR not working as expected for LOB or SYS_LOB SEGMENT (Doc ID 988744.1)

The new CS_SESSION package and DB_NOTIFICATIONS view in the Autonomous Database

In Autonomous, DBA, Oracle database on January 20, 2022 at 09:19

Two new objects have been recently introduced in the Oracle Autonomous Database on Shared Exadata Infrastructure.

1. The first one is the CS_SESSION package.

When you open a connection to the Autonomous Database, that session is assigned a consumer group. Consumer groups affect the concurrency and the DOP (degree of parallelism): the HIGH service runs in parallel while the LOW service runs in serial. So, for example, if you have a PL/SQL procedure or function that has to execute statements in the same session with changing consumer groups, the way to do it is to use the new CS_SESSION.SERVICE_NAME procedure.

The number of concurrent statements run in the database depends on 2 factors: the type of the service name and the number of Oracle CPUs provisioned for the ADB:

The new package CS_SESSION contains only one procedure: SERVICE_NAME. My current version is 19.14; I am not sure if the package/procedure is included in all lower RUs, but most likely it is.

The valid values accepted as an IN parameter are: HIGH, MEDIUM, LOW, TP and TPURGENT.

After running the procedure, Oracle does not reset any session attributes. Anything the user set for their session before calling this procedure will continue as-is. 

The ADMIN user is granted EXECUTE privilege on CS_SESSION with GRANT OPTION. The privilege is also granted to DWROLE without the GRANT OPTION.

Here is how to change the consumer group from LOW to TPURGENT:
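
A minimal sketch of the call:

BEGIN
  CS_SESSION.SERVICE_NAME('TPURGENT');
END;
/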

You might get the following error:

ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01031: insufficient privileges
ORA-06512: at "C##CLOUD$SERVICE.CS_SESSION", line 142

Note there is an open bug for this.

You can create an AFTER SET CONTAINER trigger if you would like to limit the values a user can set for the consumer group session. You might not want all users to be able to switch to TPURGENT, but allow them to go with TP only.

2. The second new object is the DB_NOTIFICATIONS view, which stores information about maintenance status notifications and timezone version upgrade notifications for the ADB instance. Let us describe the view

and check what is in it:
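
In SQL, that is simply:

DESC DB_NOTIFICATIONS
SELECT * FROM DB_NOTIFICATIONS;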

As we can see, there was a patch run on the data dictionary and there is another maintenance scheduled for the 19th of January.

3. Good to know that we can now use GitHub Raw URLs with DBMS_CLOUD APIs to access source files that reside on a GitHub Repository.

4. Also, the DBMS_PIPE package is now available in the Autonomous Database.