
Unleashing the Power of Oracle Cloud HCM Extract: A Journey into Outbound Integration

In the realm of Human Capital Management (HCM), the Oracle Cloud HCM Extract emerges as a beacon of versatility, serving as a dynamic tool for generating vital data files to meet outbound integration needs. Imagine a world where extracting, archiving, transforming, reporting, and delivering HCM data is not just a necessity but a seamless process. Welcome to the realm of HCM Extract!

What is HCM Extract?

At its core, HCM Extract empowers users to tailor data extraction according to their specific requirements within Fusion HCM. Whether it’s generating PDF payslips for employees or seamlessly transferring payroll and benefits data to third-party service providers, HCM Extract stands as a pivotal conduit.

Unlocking Data Flexibility with HCM Extract

With HCM Extract, you have the power to generate data in any format you need from Fusion HCM, catering to both ad hoc and non-ad hoc reporting needs. By defining specific parameter values during extraction, you can handle large volumes of data efficiently.

As mentioned, the output can be generated in various formats such as PDF, CSV, XML, and Excel, and delivered through FTP, email, and other channels. Integration with BI Publisher (BIP) lets you apply specific BI templates and handle bursting requirements.

Moreover, HCM Extract allows you to extract historical data and incremental changes, enabling you to schedule extractions and examine log files for debugging purposes effectively.

Extract Components & Hierarchy

The Extract definition components fall into a specific hierarchy, as displayed in the graphic below:

A diagram showing the different components of an extract definition

The extract definition comprises five key components: parameters, data groups, records, attributes, and delivery options.

To define an extract, start by specifying the parameters, including details such as the effective date and the changes-only setting. Next, outline the data groups, which serve as the fundamental building blocks of the extract and determine the type of data to be extracted.

Following the data groups, define the records, which provide basic details and location information. Attributes then come into play, specifying column-level details such as a person’s first name or identification number.

Finally, establish the delivery options, choosing from methods like email, FTP, UCM server, or other available modes to send out the extract.

The example below shows a sample extract’s hierarchy:

An illustration of a sample extract's hierarchy

In the provided sample extract, there are two distinct data groups: the Employee Data Group and the Address Data Group.

Within the Employee Data Group, you’ll find two records: Employee Basic Details and Allowance Details. These records encompass specific attributes; for example, Employee Basic Details may include attributes like first name, last name, and SSN, while Allowance Details might include attributes such as car fuel allowance and housing allowance.

Moving to the Address Data Group, you’ll encounter an address record containing attributes such as street name, city, and country.
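To make this hierarchy concrete, here is a minimal Python sketch that models the sample extract above as plain data structures. The class and field names are purely illustrative; they are not HCM Extract objects or APIs.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of an extract definition's hierarchy:
# data groups contain records, and records contain attributes.
@dataclass
class Record:
    name: str
    attributes: List[str] = field(default_factory=list)

@dataclass
class DataGroup:
    name: str
    records: List[Record] = field(default_factory=list)

sample_extract = [
    DataGroup("Employee Data Group", [
        Record("Employee Basic Details", ["First Name", "Last Name", "SSN"]),
        Record("Allowance Details", ["Car Fuel Allowance", "Housing Allowance"]),
    ]),
    DataGroup("Address Data Group", [
        Record("Address", ["Street Name", "City", "Country"]),
    ]),
]

# Walk the hierarchy the same way the definition is read:
# data group -> record -> attribute.
for group in sample_extract:
    for record in group.records:
        print(f"{group.name} / {record.name}: {', '.join(record.attributes)}")
```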

For further understanding of this hierarchical structure and the significance of each block, record, and attribute, please view the walkthrough video below.

Extract Design Process with Checkpoints

When designing extracts, it’s crucial to adhere to five key checkpoints:

  1. Begin by defining the extract name, type, and parameters.
  2. Proceed to specify the user entity, records, and attributes.
  3. Establish links between the data groups and the Root Data Group, then generate and compile fast formulas associated with the records.
  4. Create and import your Word template into BI Publisher to generate a report.
  5. Conclude by defining the delivery options, including output type and delivery mode.

An illustration showing the 5 important checkpoints in extract design

Phases of the HCM Extract Process

The HCM Extract process unfolds across five distinct phases:

  1. Preprocessing Phase: HCM Extract initiates by executing the threading data group, scanning through to pinpoint all valid extraction objects.
  2. Sub-process Phase: Each thread retrieves objects from the Object Actions table based on the Chunk Size configuration, and the data is archived according to the extract design beneath the threading data group (a conceptual chunking sketch follows the process diagram below).
  3. Root Archive Phase: This pivotal phase sees HCM Extract archiving data above the threading data group, commencing from the Root data group and progressing downwards.
  4. XML Generation Phase: With the data archived and threaded, raw extract XML creation begins.
  5. BIP Job Delivery: Once XML generation completes, the process triggers a corresponding BIP job for each BIP-based delivery option and halts until all BIP jobs conclude.

These phases delineate the journey of HCM Extract, from preprocessing through delivery, ensuring a structured and efficient data extraction process.

A diagram illustrating the HCM Extract process phases
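The sub-process phase can be pictured as worker threads pulling fixed-size chunks of object actions and archiving each chunk. The sketch below is a conceptual illustration only, assuming a plain list of object action IDs and a placeholder archive step; the chunk size, thread count, and function names are hypothetical and are not HCM Extract internals.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4    # stands in for the Chunk Size configuration
NUM_THREADS = 2   # stands in for the number of extract process threads

# Hypothetical object action IDs identified during preprocessing.
object_action_ids = list(range(1, 21))

def chunks(items, size):
    """Split the object actions into fixed-size chunks, one chunk per unit of work."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def archive_chunk(chunk):
    """Placeholder for archiving the data beneath the threading data group."""
    return f"archived object actions {chunk}"

# Each worker thread picks up chunks and archives them.
with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
    for result in pool.map(archive_chunk, chunks(object_action_ids, CHUNK_SIZE)):
        print(result)
```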

HCM Extract Troubleshooting

During Extract Run

  • Download all log files from Data Exchange > View Extract Results
  • Check for error details in “Archive and General XML” log files
  • Run the “Extracts Process Diagnostics Report” process from Scheduled Processes and review the output file for error details
  • If the error occurs during the delivery phase, you are using a BI report as your delivery option, and the log or diagnostic file shows the error against a BIP process ID:
    • Navigate to BIP and find the report attached to the delivery option. Click More > History. On the Report History page, clear the value in the Owner field, leave it blank, and click Search.
    • Click the latest Report Job Name in the search results and select the Error/Problem icon in the Status column. Include the error/problem information shown in the popup when logging the bug.

Performance Issues

  • Run the “Extracts Process Diagnostics Report” process from Scheduled Processes and review the output file for details.
  • Identify which phase is taking the most time (a sample report is shown below).

Screenshot of an Extract Run demo showing timing summary

In the sample data, the archive process accounts for 67% of the total run time, which points to the extract definition itself rather than formatting or delivery.
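As a quick way to read a timing summary like this, the sketch below totals hypothetical per-phase durations and reports each phase’s share of the run. The phase names and numbers are invented for illustration and simply mirror the 67% example above.

```python
# Hypothetical per-phase timings (in seconds) from a diagnostics report.
phase_seconds = {
    "Preprocessing": 120,
    "Archive Process": 1010,
    "Root Data Group Extract": 90,
    "XML Generation": 180,
    "BIP Formatting and Delivery": 100,
}

total = sum(phase_seconds.values())
for phase, seconds in sorted(phase_seconds.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{phase:<30} {seconds:>6}s  {seconds / total:6.1%}")

# The phase with the largest share is where tuning effort pays off first.
slowest = max(phase_seconds, key=phase_seconds.get)
print(f"Focus tuning on: {slowest}")
```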

Timing

Here are common timing issues and how to address them, phase by phase:

  • Preprocessing – Review the threading database item (DBI), its fast formula, and any array DBIs; optimizing the filter criteria can also improve throughput.
  • Archive Processes – The number of process threads, the volume of data processed, and the use of array DBIs and functions in rule formulas can all affect the run time for this phase. Review the user entities you use in your extract definition.
  • Root data group extract – Processes the data for the data groups above the threading data group in the extract’s data hierarchy. Improve throughput by reviewing the filter criteria or criteria formula.
  • Formatting in BIP – The volume of data being formatted, the template design, and the grouping and sorting functions in the report template affect throughput.
  • XML Delivery to UCM – Delivers the generated XML to the content server. Use the compress option to improve throughput.

Changes Only Extracts

Changes Only extracts are incremental extracts, typically used to send incremental data to third-party applications on a daily basis. Each run compares the current data with the snapshot taken by the previous run: if the previous run was yesterday, today’s run compares against yesterday’s snapshot, produces only the changed data as incremental output, and sends it through the configured delivery option.
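Conceptually, a Changes Only run behaves like the comparison sketched below: current data is checked, key by key, against the previous snapshot, and only new or changed rows land in the incremental output. This is a simplified Python illustration of the idea, with made-up keys and attributes, not the actual comparison engine.

```python
# Previous snapshot vs. current data, keyed by a unique key attribute
# (for example, a person number). Values are the extracted attributes.
previous_snapshot = {
    "E1001": {"first_name": "Ana", "city": "Lisbon"},
    "E1002": {"first_name": "Raj", "city": "Pune"},
}

current_data = {
    "E1001": {"first_name": "Ana", "city": "Lisbon"},    # unchanged -> excluded
    "E1002": {"first_name": "Raj", "city": "Mumbai"},     # changed   -> included
    "E1003": {"first_name": "Mei", "city": "Shanghai"},   # new       -> included
}

# Keep only rows that are new or differ from the previous snapshot.
incremental_output = {
    key: row
    for key, row in current_data.items()
    if previous_snapshot.get(key) != row
}

print(incremental_output)  # only E1002 and E1003 appear
```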

Points to Verify for Changes Only Extracts:

  • The Changes Only extract definition should have a threading DBI for the root data group
  • If you are getting duplicate data, or Changes Only produces output even when there is no change, check the following:
    • Potential design issues:
      • The multi-threading DBI chosen is not unique for the data group. Configure the correct multi-threading DBI, or provide a filter that ensures the threading object value is unique for the data group.
      • Check whether the Key Attribute is marked incorrectly for the data group
    • If there is still an issue, use the Changes Only diagnostic utility in the View Extract Results UI to understand which attributes caused the data to be extracted:
      • Go to Data Exchange > View Extract Results and search for your run
      • Download the Extract Output XML file and get the OBJECT_ACTION_ID for the data that is being extracted

If person data is being extracted even though nothing in it has changed, you’ll want to find out where that data is coming from. Below is an example:

Screenshot of an Extract Run demo showing an object action of a person's record

For that object action ID, you can see which attribute change caused the record to be picked up in the Changes Only output.
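If you prefer to scan the downloaded Extract Output XML programmatically rather than by eye, a short script like the one below can list every OBJECT_ACTION_ID it finds and flag repeats. The file name, and the assumption that the IDs appear as OBJECT_ACTION_ID elements, are illustrative; adjust both to match your extract’s actual output structure.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Path to the Extract Output XML downloaded from View Extract Results
# (hypothetical file name).
tree = ET.parse("extract_output.xml")
root = tree.getroot()

# Collect every OBJECT_ACTION_ID element, wherever it sits in the
# record hierarchy; change the tag name if your output differs.
ids = [el.text for el in root.iter("OBJECT_ACTION_ID") if el.text]

# Repeated IDs hint that the same object action was archived more than
# once, which is worth investigating when chasing duplicate output.
for object_action_id, count in Counter(ids).most_common():
    print(object_action_id, count)
```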

Let’s remember: integration is not just about data transfer; it’s about weaving a seamless narrative that connects systems, people, and processes. With Oracle Cloud HCM Extract, you can achieve integration excellence and innovation.

This post and video clip are from the 2023 Cloud HCM Week presentation: HCM Extract – Outbound Integration Functionality & Troubleshooting, presented by Prateek Shukla, Senior Principal Software Engineer, Oracle. The full session is available in the Quest Learn Library, under Recordings & Presentations.
