By April Sims | Edited by Tim Boles
Our university has been live in a cloud environment since December 2016. This is the story of how we got there. This paper outlines the major factors that shaped the project: scope, hardware/software decisions, coordination of local and remote services, and security. To keep it applicable to a wide range of environments, it omits many low-level technical details.
Certain conditions made this jump to a cloud environment the right decision at the right time. Our ERP was changing technology, requiring a change-out of the entire application stack. We were also experiencing personnel burnout from a proliferation of hardware servers in our environment, as well as the lack of a proven disaster-recovery plan. Our success rested on one rule: no customizations in the new version of our ERP. That rule freed us to proceed quickly, migrating our IT operations from local hardware to a complete cloud IaaS environment in about six months.
In my conversations with other IT departments striving to implement cloud-based projects, the most common obstacle is internal resistance to such a large change. Most of that resistance stems from a perceived loss of control in handing duties or activities over to an unknown, unseen person or entity. After going through the process, our team believes this is the future of IT, much like other sharing technologies that have shaken up how companies use resources.
Technical and monetary constraints often make the most important decisions for you. That said, moving to a cloud environment can be expensive if your estimates for services are not accurate. Once we realized that the cost of reserved instances on a three-year schedule was comparable to the budget outlay for our three-year local hardware cycle, the project was approved for the proof-of-concept stage.
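The comparison that got us to proof of concept can be sketched as simple arithmetic. The rates below are hypothetical placeholders, not our actual figures; substitute your vendor's real pricing for your instance classes.

```python
# Sketch of the reserved-vs-on-demand cost comparison that justified our
# proof of concept. All dollar figures here are hypothetical examples.

HOURS_PER_YEAR = 8760

def on_demand_cost(hourly_rate: float, years: int) -> float:
    """Total cost of running one on-demand instance continuously."""
    return hourly_rate * HOURS_PER_YEAR * years

def reserved_cost(upfront: float, effective_hourly: float, years: int) -> float:
    """Total cost of a reserved instance: upfront payment plus the
    discounted hourly rate over the commitment period."""
    return upfront + effective_hourly * HOURS_PER_YEAR * years

# Hypothetical figures for one database-class instance over three years
three_year_on_demand = on_demand_cost(hourly_rate=0.50, years=3)
three_year_reserved = reserved_cost(upfront=3000.0, effective_hourly=0.20, years=3)

print(f"On-demand, 3 years: ${three_year_on_demand:,.0f}")
print(f"Reserved,  3 years: ${three_year_reserved:,.0f}")
```

Summing a row like this for every server in the inventory spreadsheet is what made the three-year cloud total directly comparable to a three-year local hardware cycle.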
During our initial proof-of-concept testing, we saw several seconds of delay when the database and application servers were located in different environments (local vs. cloud). Testing was also slow overall, which we traced to the smaller instances used for the proof of concept. Performance returned once the cloud instances were sized appropriately for the workload and all infrastructure was co-located in the cloud environment.
Preparation and Costing
Our team started by taking inventory of current resources – network, hardware, software, personnel. As a team, we looked at moving each of the separate components to a cloud environment divided into the following software types: Databases, Application Servers, LDAP Authentication (CAS), Portal, DNS, Batch Server, Network Replacement, Log Capture/Archiving, Load Balancing, and DevOps.
At several brainstorming sessions, we started a spreadsheet detailing the infrastructure needed to duplicate our environment, tied directly to the cloud vendor's formula for calculating costs. All preparation and planning were communicated directly within the committee rather than transferred and translated by management; this gave our team strength and buy-in from each member. Costs were freely communicated to all staff as part of the buy-in strategy, so each person could see for themselves what was economically feasible. The software inventory and pricing proved to be the main driver of technology selection, as cloud vendors offer different licensing options.
Tracking and collaboration tools were instrumental in getting this project done in real time: interactive spreadsheets, documents, chatbots and project tracking (we used Trello to keep tasks organized). These tools also reduced the need for management intervention, since everyone could access all of the data and information at any point in the project.
While the ability to scale horizontally and vertically is widely touted for cloud implementations, elastic scaling can also be the most expensive deployment model. We chose several years of reserved instances instead of the freely scalable option, but only after a month-long test on non-reserved instances confirmed that the actual production workload was within acceptable parameters. Several costs make up a cloud project: instance types (reserved vs. on-demand), software licenses, storage, and the dedicated connection back to your local environment.
One way we reduced software costs was to use OS software provided by the cloud vendor. While this choice does reduce costs, we occasionally come across small issues specific to these Unix flavors, such as feature availability. The trade-off has not been a problem for our system administration team because the result is easier to maintain than our previous local environments.
We started this migration-to-the-cloud project confident it was going to succeed, and nothing told us it wouldn't work. That naïveté aside, the proof-of-concept stage of the project was the most critical for success; this is where most of the knowledge transfer occurred. Most of that transfer came from simply reading the cloud provider's manuals; several individuals attended conference sessions to flesh out the main concepts, and additional training came from another university using the same technologies to fill in some of the missing pieces. We have published an IT Culture Deck of the core values that were demonstrated and fleshed out during our cloud implementation project.
My experience has shown that Oracle provides one of the best tools for making that transition easier: Data Guard, also known as a physical standby. We used this method to migrate the database to the cloud environment as well. The cloud environment ran a different UNIX flavor and version on a VM, while the local environment used dedicated storage, but testing a full physical standby switchover assured us that these differences would not hold up the project. There were a few ignorable Java display errors in the GUI-based Oracle installer that did not affect any functionality.
The Oracle version stayed the same, which assured the success of the manual switchover process for the physical standby; we chose the manual method to maintain control, since all IT systems were switched over to the cloud at the same time as the database. A few connectivity issues with ancillary systems were ironed out, keeping the outage to a few hours, performed during the winter break in our university schedule. We spent the next month monitoring expenses, performance, and usability to validate our choice of cloud instances, which allowed us to pre-pay for three years of reserved instances with confidence.
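For readers unfamiliar with a manual Data Guard switchover, the sequence can be captured as an ordered checklist. The sketch below generates the classic pre-broker SQL*Plus steps; verify them against the Data Guard documentation for your specific database version before using them, as exact requirements vary by release.

```python
# Checklist generator for a classic manual Data Guard switchover.
# The SQL follows Oracle's documented pre-broker sequence; confirm the
# details against the Data Guard manual for your database version.

def switchover_steps() -> dict:
    """Return the ordered SQL*Plus steps for each side of the switchover."""
    return {
        "old_primary": [
            "SELECT SWITCHOVER_STATUS FROM V$DATABASE;",  # expect TO STANDBY
            "ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY "
            "WITH SESSION SHUTDOWN;",
            "SHUTDOWN IMMEDIATE;",
            "STARTUP MOUNT;",
        ],
        "old_standby": [
            "SELECT SWITCHOVER_STATUS FROM V$DATABASE;",  # expect TO PRIMARY
            "ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY "
            "WITH SESSION SHUTDOWN;",
            "ALTER DATABASE OPEN;",
        ],
    }

for side, steps in switchover_steps().items():
    print(f"-- {side}")
    for sql in steps:
        print(sql)
```

Keeping the steps in a reviewed script rather than typing them ad hoc is one way to retain the control that motivated our manual approach.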
Post Cloud Switchover
Our team's work wasn't done once the switchover was completed; a few systems that were still local needed to be transferred to the cloud environment. In retrospect, our team feels that timing was the critical reason for our success. Other universities going through the same ERP software upgrade migrated to new local hardware instead of using the upgrade as a springboard to the cloud, which meant spending additional money on resources in both environments. Our process has since been duplicated by several universities of similar size, lending credence to the adage that a smaller institution can be more agile. Our IT team is very small compared to other universities with the same student count, with minimal management layers, and that was key to our success.
We also had several mandates that were resolved by moving to a cloud environment: implementing a comprehensive disaster recovery plan, expanding two-factor authentication and SSO, upgrading CAS, using Git to coordinate the rollout of Docker instances, and outsourcing third-party applications that didn't fit technically, all while reducing the workload of existing IT personnel. We accomplished this project internally, with no personnel outsourcing, by consolidating servers, outsourcing third-party apps, and migrating to a Dockerized application deployment environment. We also used automation wherever possible, such as Git-driven scripting of Docker container deployments, which made versioning and modifications easier to maintain.
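A Git-driven Docker rollout of the kind described above can be sketched as a script that composes the redeployment commands for a tagged revision. The repository URL, tag, and container name below are hypothetical placeholders; the commands are returned as strings so they can be reviewed or dry-run before execution.

```python
# Dry-run sketch of a Git-driven Docker redeployment, in the spirit of the
# automation described above. Names and URLs are hypothetical placeholders.

def deploy_commands(repo_url: str, tag: str, container: str) -> list:
    """Compose the shell commands to redeploy one Dockerized application
    from a tagged Git revision, returned as strings for review."""
    image = f"{container}:{tag}"
    return [
        # Fetch exactly the tagged revision into a fresh build directory
        f"git clone --branch {tag} --depth 1 {repo_url} build-{tag}",
        f"docker build -t {image} build-{tag}",
        f"docker stop {container} || true",  # ignore if not running
        f"docker rm {container} || true",
        f"docker run -d --restart unless-stopped --name {container} {image}",
    ]

for cmd in deploy_commands("https://git.example.edu/portal.git", "v1.4.2", "portal"):
    print(cmd)
```

Because every deployment maps to a Git tag, rolling back is just re-running the script with the previous tag, which is what made versioning and modifications easier to maintain.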
As a final note, we were surprised how positive the entire experience was. This can be attributed to the team buy-in from the beginning. All personnel had specific roles and tasks assigned but were allowed to contribute ideas as needed all along the way. Team makeup was the most important part of the project, not the technology that was implemented.
About the Author
April Sims is currently a Southern Utah University DBA with over 15 years' experience, as well as a member of the IOUG SELECT Journal Editorial Advisory Group and a past Executive Editor. She earned an MBA from the University of Texas at Dallas and has been an Oracle Certified Professional since 8i. She has presented at IOUG Collaborate, Oracle OpenWorld, Ellucian Live and Educause on Oracle-related technical topics for over 12 years. She leads an informal Ellucian Banner User Group interested in migrating to a cloud environment.