
Exadata - Part 2: Implementation Strategies


By Umair Mansoob | Edited by Mike Gangler

Learn how to implement Oracle Exadata Database Machine without any previous implementation experience and get the most out of your Exadata Machine. This article describes an Exadata implementation strategy made up of five implementation phases: planning, migrating, optimizing, testing and final cutover.

Introduction

The Oracle Exadata Database Machine was first introduced by Oracle in 2008 and has become one of the most popular platforms for hosting Oracle databases. The Exadata Machine is like a mini data center in itself, comprising database servers, storage servers, InfiniBand switches and an Ethernet switch working together to deliver outstanding database performance.

Oracle customers have implemented, and continue to implement, Exadata on their own, primarily because they are unwilling to spend additional money on hiring an Oracle Exadata expert. Exadata is a robust platform with a substantial purchase cost, making it a significant investment, and it makes sense to get the best use out of that investment.

However, hiring an Oracle Exadata expert can make good financial sense: an expert can shorten the learning curve for an organization's experienced team of DBAs. Plus, if you don't follow a proper implementation strategy, you're likely to end up doing just a database migration rather than taking full advantage of all of Exadata's features.

It is a mistake to treat the Exadata Machine like any other hardware; doing so leaves its best features, including storage indexes, smart scan, hybrid columnar compression and offloading, underutilized. Give careful consideration to how best to use the power of these features to help customers achieve the best return on investment (ROI).

Implementation Strategy Overview

The proposed Exadata implementation strategy is made up of five implementation phases:

  1. Planning
  2. Migrating
  3. Optimizing
  4. Testing
  5. Final Cutover


Each Exadata implementation should start with planning, then move to migrating data using suitable migration methods that can take advantage of the Exadata’s features. Once data migration is completed, look into implementing Exadata best practices and features to achieve optimal performance from your Exadata Machine. Then each migration should go through multiple testing cycles before the final cutover to ensure there are no issues and allow for performance tweaks.

Exadata Implementation Phase 1: Plan


During the planning phase, the current system targeted for migration should be analyzed in detail. Collect information such as the following (a few starter queries appear after the list):

  • Database size
  • Application type
  • I/O throughput
  • Memory footprint
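
To start filling in this list, a couple of simple queries on the source database go a long way. This is only a sketch; supplement the figures with AWR I/O and throughput data for a complete picture:

    -- Allocated database size in GB
    SELECT ROUND(SUM(bytes)/1024/1024/1024) AS db_size_gb
      FROM dba_segments;

    -- Current memory footprint settings
    SELECT name, display_value
      FROM v$parameter
     WHERE name IN ('sga_target', 'pga_aggregate_target');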

Based on your analysis, make important deployment decisions, such as:

  • Do you need to virtualize the Exadata Machine?
  • Do you need to implement resource management?
  • What ASM Redundancy level should you choose?
  • How large should your Fast Recovery Area (FRA) be?
  • What data migration strategy should be used?

The data migration strategy should be discussed early in Exadata implementation planning. Map each database migration to a particular migration method, such as GoldenGate, Export/Import or Data Guard, based on its requirements.

Data migration requirements include the downtime tolerance for the migration, the Exadata features that will be used, performance requirements, etc. You will need to socialize the plan with business users and other stakeholders participating in the upcoming migration so they can plan for and understand the migration strategy, outage requirements and any application connection changes that may be required.

High availability requirements based on Service Level Agreements (SLAs) should also be discussed early. Exadata does come with the Oracle Real Application Clusters (RAC) high availability option, but if you're migrating from a non-clustered environment, you will need to verify that the application works with RAC and ensure your testing plan incorporates RAC-specific testing. Testing is also an important part of an Exadata implementation; make sure to discuss testing options with stakeholders and create detailed testing plans for all applications.

Exadata Implementation Phase 2: Migrate


There are two common methods for migrating databases to the Exadata Machine:

  • Logical Migration
  • Physical Migration

Each migration method has its own pros and cons, so analyze them carefully based on your requirements.

Logical Migration

Data migrations using Data Pump, GoldenGate and logical standby are considered logical database migrations. These methods are particularly useful if you want to create an Exadata database from a DBCA template with all the Exadata best practices built in. Logical migrations are also useful when you are upgrading the database version during migration or converting from big-endian to little-endian format. Here are brief descriptions of the available logical migration methods and their benefits:

  • Logical Standby: A logical standby database is an option when the physical structure of the source database doesn't have to match that of the target database. The logical standby migration technique is best known for the following benefits: minimal downtime, the ability to adjust the ASM AU size and the ability to implement some physical changes to the database during migration.
  • GoldenGate: This strategy is known for its transition flexibility: the source database can come from any platform, any endian format and any database version. GoldenGate offers migration to Exadata with minimal downtime, cross-platform support, zero data loss and a full fallback plan. It also allows database changes that can improve performance on Exadata, such as partitioning and Hybrid Columnar Compression, to be implemented as part of the migration. Finally, it lets you use Real Application Testing (RAT) in conjunction with Flashback Database to perform a wide variety of tests on Exadata while keeping the database in sync with the source, allowing incremental adjustments without impacting cutover timelines.
  • Data Pump: The Data Pump approach is a common and widely used strategy for migrating Oracle databases. With Data Pump, you can migrate almost any Oracle version, from any platform, to Exadata. It can also be a good method if you want to compress or partition your tables during migration. It provides the benefits of simple migration, full data type support and cross-platform support. However, for larger databases it can mean longer downtime for the migration. (A sketch of a Data Pump migration follows this list.)
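
For illustration, here is a minimal sketch of a full-database Data Pump migration. The directory object, file names and parallelism settings are assumptions to adapt, and the TRANSFORM clause that compresses tables on import requires Oracle 12c or later:

    # On the source host: full export, writing multiple dump files in parallel
    expdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=srcdb_%U.dmp \
          LOGFILE=srcdb_exp.log PARALLEL=4

    # On the Exadata target: import, optionally compressing tables as they load
    impdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=srcdb_%U.dmp \
          LOGFILE=srcdb_imp.log PARALLEL=8 \
          'TRANSFORM=TABLE_COMPRESSION_CLAUSE:"COMPRESS FOR QUERY HIGH"'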

Physical Migration

Physical migration means a block-by-block copy of the source database to the Exadata Machine. Like logical migration, it has its own pros and cons, and in some cases it can be very useful. This method suits customers who are not planning to introduce any new database features, like compression or partitioning, as part of the migration and who intend to introduce new features afterward.

The one major concern with the physical migration method is that you bring along all the characteristics of the source database. Exadata comes with its own set of best practices, and your source database might not be in line with them. I strongly suggest running the Exachk utility after a physical migration; it is designed to evaluate hardware and software configuration, MAA best practices and critical database issues for all Oracle Engineered Systems. Below are brief descriptions of the physical migration methods available to you and their benefits:

  • Physical Standby: The physical standby approach typically requires little downtime; you create and configure a Data Guard physical standby on the target Exadata machine and, when ready, simply perform a switchover to complete the migration. This strategy works well when staying on the same database release, though it comes with cross-platform limitations. Its benefits include minimal downtime, a quick cutover to Exadata and simplicity.
  • Transportable Database (TDB): This is a solid strategy for migrating from a different platform with the same endian format. It provides the migration benefit of simplicity.
  • Transportable Tablespace (TTS): This works best when you want to migrate from a platform with a different endian format or a different release. It provides the migration benefits of simplicity and support for cross-platform migrations. (A sketch of the cross-endian flow follows this list.)
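
As a sketch of the cross-endian transportable tablespace flow, the steps below move a single tablespace; the tablespace, directory object and staging path are hypothetical:

    -- On the source: make the tablespace read-only, then export its metadata
    ALTER TABLESPACE users READ ONLY;

    expdp system DIRECTORY=dp_dir DUMPFILE=users_meta.dmp TRANSPORT_TABLESPACES=users

    -- Convert the datafiles to the target endian format with RMAN
    RMAN> CONVERT TABLESPACE users
            TO PLATFORM 'Linux x86 64-bit'
            FORMAT '/stage/%U';

The converted datafiles and the metadata dump are then copied to the Exadata machine and plugged in with impdp using the TRANSPORT_DATAFILES parameter.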

Exadata Implementation Phase 3: Optimize


If you want to take full advantage of the Exadata Machine, you should look into implementing database and Exadata features. This is where an Exadata expert can come in handy.

First, compression not only reduces your storage footprint; it can also improve performance. Partitioning, when implemented properly, can likewise improve performance while providing maintenance advantages and increased availability. Parallel execution can help with performance, as can properly caching tables in the Exadata flash cache. Though offloading and smart scan are enabled by default, you need to verify during testing that they are actually happening and that you are taking full advantage of them.

The Exadata Machine does a great job of managing resources, but if you are planning to use Exadata as a consolidation platform, you should look into resource management through Database Resource Manager (DBRM) and I/O Resource Manager (IORM) and determine whether they will be needed. Below are brief descriptions of Exadata and database features and how they can play a key role in achieving extreme performance from your Exadata Machine.

  • Compression: Even without Exadata, the Oracle database has two native compression types: basic table compression and OLTP compression. Basic table compression does not maintain compression across DML operations, which limits where it can be used. OLTP compression does maintain compression through DML operations and therefore opens up more options for where it can be utilized.

On Exadata, you can additionally use Hybrid Columnar Compression (HCC), which can achieve extremely good compression ratios but, like basic table compression, does not maintain compression across DML operations. It is therefore best used in conjunction with partitioned tables, by compressing older data partitions. Compression saves expensive Exadata storage, and it is also known to improve performance significantly and to make better use of the database buffer cache.
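
As a minimal sketch of HCC in practice (the table and partition names are hypothetical):

    -- Create a reporting table with warehouse-grade compression
    CREATE TABLE sales_hist
      COMPRESS FOR QUERY HIGH
      AS SELECT * FROM sales WHERE sale_date < DATE '2015-01-01';

    -- Compress a cold partition of an existing table for archival
    ALTER TABLE sales MOVE PARTITION sales_2014
      COMPRESS FOR ARCHIVE HIGH;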

  • Partitioning: Oracle supports many partitioning techniques, including range, interval, reference, list and hash. Partitioning can help you achieve better performance through partition pruning. It also helps maintenance by letting you perform certain tasks, such as truncating, gathering statistics and rebuilding indexes, on a single partition or group of partitions instead of the whole table, providing ease of maintenance and higher availability.
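
For example, a monthly interval-partitioned table might look like the following sketch (names are illustrative):

    CREATE TABLE orders (
      order_id   NUMBER,
      order_date DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (order_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    ( PARTITION p_initial VALUES LESS THAN (DATE '2015-01-01') );

    -- Maintenance can then target one partition instead of the whole table
    ALTER TABLE orders TRUNCATE PARTITION p_initial;
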
  • Parallelism: Executing queries in parallel can speed up both loads and queries. If you are not already using the parallel query feature, look at introducing it during or after the migration. You can enable parallel execution at the object level or at the SQL level using a hint.

You can let Oracle determine the degree of parallelism based on a set of criteria and some database initialization parameter settings. This feature is called Automatic Degree of Parallelism (Auto DOP), and it parallelizes your queries based on thresholds. The most prominent threshold is set by parallel_min_time_threshold, whose default is about 10 seconds. If you want to run more statements in parallel, reduce that value so that more plans qualify for parallel execution.
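
A sketch of both approaches follows; the parameter values and hint are examples, not recommendations:

    -- Let the optimizer choose the degree of parallelism (Auto DOP)
    ALTER SYSTEM SET parallel_degree_policy = AUTO;
    ALTER SYSTEM SET parallel_min_time_threshold = 2;  -- seconds; default ~10

    -- Or request parallelism explicitly, per object or per statement
    ALTER TABLE sales PARALLEL 8;
    SELECT /*+ PARALLEL(8) */ COUNT(*) FROM sales;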

  • Flash Cache: Exadata comes with terabytes of flash cache, also called Smart Flash Cache because of its ability to move data in and out of the cache based on usage. The flash cache is enabled by default, so no special configuration is required to use it. You can turn it off if you want, or encourage caching of an object via an ALTER TABLE statement. The flash cache also comes with a write-back option, which allows write I/O to use the flash cache in addition to read I/O. If you have write-intensive applications and find significant "free buffer waits" or high I/O write response times, write-back flash cache can be a suitable option.
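
As a sketch (the table name is hypothetical, and the write-back change should follow Oracle's documented procedure for your storage software version):

    -- Encourage flash caching of a hot table via its storage clause
    ALTER TABLE hot_lookup STORAGE (CELL_FLASH_CACHE KEEP);

    -- Write-back flash cache is enabled per storage cell through CellCLI
    CellCLI> ALTER CELL flashCacheMode = WriteBack
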
  • Offloading/Smart Scan: Some of Exadata's extreme performance is achieved through offloading and smart scan. Offloading means some Oracle processing is offloaded to the Exadata storage nodes, taking load off the database servers. Processing that can be offloaded to the storage nodes includes incremental backups, datafile creation, decompression and decryption. Smart scan refers to Exadata's ability to perform projection and predicate filtering at the storage level, which means the storage layer returns only the required rows and columns to the database nodes.

By filtering data at the storage cell, smart scan reduces I/O and network traffic between the storage servers and database nodes and reduces the amount of processing the database node must do on a result set. There are some prerequisites for smart scan, such as direct path reads and full table scans, so make sure smart scans are actually happening so your database takes advantage of offloading to the storage nodes wherever it can.
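
One quick way to confirm smart scans are occurring is to check the cell statistics; non-zero values for the statistics below indicate offload activity:

    SELECT name, ROUND(value/1024/1024) AS mb
      FROM v$sysstat
     WHERE name IN ('cell physical IO bytes eligible for predicate offload',
                    'cell physical IO interconnect bytes returned by smart scan');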

  • Resource Management: If you are planning to consolidate databases onto the Exadata platform, you might want to look into implementing some level of resource management. Resource management allows you to achieve consistent performance across different workloads and databases within the Exadata. You can use Oracle's native resource management tool, Database Resource Manager (DBRM), to manage CPU utilization, parallel statement queuing and long-running queries, and the Exadata-native I/O Resource Manager (IORM) to manage I/O throughput and latency.
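
As a sketch, an inter-database IORM plan is set on each storage cell through CellCLI (typically pushed to all cells with dcli); the database names and allocations here are illustrative:

    CellCLI> ALTER IORMPLAN dbplan=((name=prod, level=1, allocation=70), (name=dev, level=1, allocation=30))
    CellCLI> ALTER IORMPLAN objective='auto'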

Exadata Implementation Phase 4: Testing


Testing, testing, and testing. Testing is the most important part of an Exadata implementation strategy. You should have a testing plan ready long before you start any migration processing. Many types of tests should be performed: unit/functional testing, performance/load testing, environment break testing and, where DR and HA are in scope, failover testing.

All applications and critical processes need to be identified and thoroughly tested to give the migration the best chance of success. If you are planning to introduce new features, like compression and/or partitioning, make sure to include special testing for those changes. Though these features are designed to improve database performance, they can cause SQL queries to behave badly when not implemented properly. Capture performance statistics using tools such as AWR, ASH and SQL Performance Analyzer, and compare them with your baseline and source results.

AWR reports provide the details you need to compare elapsed time, I/O waits and CPU utilization. Also compare critical processes and queries using ASH reports, which provide further details about execution plans and wait times. All of this can be simplified with Real Application Testing (RAT), which gives you the best workload testing possible based on a capture of the current environment's exact load. You will also want to validate the Exadata configuration by running Exachk and remediate any issues you encounter during this phase.
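
For the AWR comparison, the diff report that ships with the database is a convenient starting point (run it from SQL*Plus and supply the two snapshot pairs when prompted):

    @?/rdbms/admin/awrddrpt.sql

Exachk is run from the operating system; the exact invocation varies by version, but a full run is typically along these lines:

    ./exachk -a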

Exadata Implementation Phase 5: Cutover


The final implementation phase is the cutover. Back up both source and target databases and have a fallback plan in case you encounter any issues after the cutover. If you are migrating a customer-facing critical database, your fallback plan should include syncing data back to the source database using replication technologies.

Depending on the migration method and maintenance window, you will probably have to sync your target database just before the cutover. You will need to perform data validation, especially if you used any method other than GoldenGate or a physical standby to sync the source and target databases. Stay on guard for the next 48 hours, ready to remediate any issues.
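
A minimal validation sketch compares row counts over a database link (the link and table names are hypothetical; DBMS_COMPARISON can be used for deeper row-level checks):

    SELECT (SELECT COUNT(*) FROM orders)           AS target_rows,
           (SELECT COUNT(*) FROM orders@source_db) AS source_rows
      FROM dual;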

Conclusion

Described above is a proven Oracle Exadata implementation strategy. If you follow it step by step, you will have a greater chance of success in implementing Exadata with all its advantages. Without a proper implementation strategy, you will just be performing a data migration and will likely miss the opportunity to use Exadata's extraordinary features. Good luck!
