In this video, take a closer look at relational data as you explore how to work with Cloud SQL for MySQL.
- [Instructor] So the next section for DR is around relational data. And again, I recommend that you take a look at this "Disaster recovery scenarios for data" document. It talks about files, and we briefly looked at that in the previous movie, but it also talks about relational data.

So of course, you're going to start with data and database backups. And referring to the first document in the DR series, you're going to try to meet a medium recovery time objective and a small recovery point objective. In other words, how quickly you get your database back up and working, and how much potential data loss you have. It mentions here that database backups are more complex than file backups because you're recovering to a certain point in time. Again, this is coming from working many years as an on-prem DBA: there is quite a lot of complexity in DR for databases. But what is important to understand in GCP is that, with the service choices, some of the activities that DBAs traditionally performed on premises can be automated, depending on the solution that you select.

Now notice, in addition to backing up and working with your cloud-based service, another aspect of this, if you're working in a hybrid scenario, is that you can back up to Cloud Storage from an on-premises relational database system, such as MySQL, and this gives you information about doing that. Here the article details DR building blocks that we've covered in previous movies, such as Cloud Interconnect and Cloud Storage tiered storage, to make sure that you're utilizing the service in the most economical way. For example, you might want to perform offsite backups up onto Google Cloud and have a large amount of relational data that isn't accessed very frequently, an archival-style backup.
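As a rough sketch of that hybrid pattern, an on-premises MySQL backup could be pushed to Cloud Storage with something like the following. The database name, bucket name, and region here are placeholders, and a real recovery playbook would add encryption, retention policies, and restore verification on top of this:

```shell
# Sketch: dump an on-premises MySQL database and copy it to a
# Cloud Storage bucket. "mydb" and "my-dr-backups" are placeholders.

# One-time: create a Nearline bucket for infrequently accessed backups.
gsutil mb -c nearline -l us-central1 gs://my-dr-backups

# Dump without locking InnoDB tables, compress, and upload.
BACKUP_FILE="mydb-$(date +%F).sql.gz"
mysqldump --single-transaction mydb | gzip > "$BACKUP_FILE"
gsutil cp "$BACKUP_FILE" gs://my-dr-backups/mysql/
```

The `--single-transaction` flag keeps the dump consistent for InnoDB without blocking writers, which matters when the backup runs against a production database.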
You're probably going to want to use a cheaper storage class, such as Nearline, or even Coldline if it's really archival and users aren't accessing the data very frequently, if at all.

This shows a sample architecture using Dedicated Interconnect, which is, again, a cost that I see customers not factoring in when they make an offsite DR plan. So I want to specifically call out setting up that service, which is typically part of the architecture.

So, as I mentioned previously, you might have the alternative situation where your production environment is GCP. For data backup, of course, we just talked about using GCS efficiently. Now for database backup, it depends on what type of configuration you have. I have worked with some enterprise customers who are utilizing lift and shift, so they're managing their database clusters at the GCE level, and then the mechanisms and methods for DR on data are very similar in the cloud to on-prem. More commonly, though, people will move to a partially managed database solution such as Cloud SQL, and one of the reasons to do that is that some of the tasks, like backing up, can become automated.

And this is showing you that lower-level architecture, focusing on some of the concepts we talked about when we were discussing GCE capabilities: working with images, snapshots, and instance templates, so that you can have reproducibility in terms of the service configuration if you're running it on raw VMs. What I prefer to work with on GCP-hosted environments are the managed services. There are a number of different database services, and as I mentioned, we'll be getting into the NoSQL services in different scenarios. We're going to focus here on the most common use case, which is relational data, and in that case it's Cloud SQL and Cloud Spanner.

So we'll start with Cloud SQL. What I've done, just for time purposes, is I've created an instance with failover by clicking through the console.
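The same instance setup done here through the console can be scripted. A minimal sketch with gcloud might look like this; the instance name, tier, and region are placeholders, and the exact flags available depend on your gcloud version:

```shell
# Sketch: create a Cloud SQL for MySQL instance with a high-availability
# (failover) configuration and automated backups enabled.
# "my-instance" and the tier/region values are placeholders.
gcloud sql instances create my-instance \
    --database-version=MYSQL_5_7 \
    --tier=db-n1-standard-1 \
    --region=us-central1 \
    --availability-type=REGIONAL \
    --backup-start-time=03:00 \
    --enable-bin-log
```

Enabling binary logging alongside automated backups is what makes point-in-time recovery possible for MySQL, which ties directly back to the recovery point objective discussed above.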
And just so that you understand some of the capabilities that are built in here, you have the ability to migrate data. These migrations can be for other types of scenarios, but they can also be relevant in DR scenarios as part of your recovery playbook. You can see that you have the ability with this tool to migrate from on-premises to Google Cloud SQL, from another cloud provider to Cloud SQL, or from one GCP project to another. It's basically a wizard that allows you to connect to the data source, creates a read replica, synchronizes it, and then you can promote it.

Now, in the configuration I set up, I chose to set up a failover. So if you are configuring a MySQL Cloud SQL instance, you can optionally set up a failover. And notice, if you have a failover instance available, then you can click this button to fail over, and click Trigger Failover. Now, of course, you would generally do this with a script, but my point in showing you this is that this capability is built into the managed MySQL instances that are available on GCP.

In addition to this, I've configured automatic maintenance. So while this failover is occurring, you can see that automated backups are enabled, and a backup was taken within the time since I set this up. It took about 10 minutes. The failover is now complete, and you can see that. You can look at the new master and you can look at the replica here, and you can then work with the two copies. Also notice that you have the capability to look at monitoring on the replica as well.

Now, something interesting was announced at GCP Next last week regarding Cloud SQL. Currently, Cloud SQL can run managed MySQL or PostgreSQL instances. Google announced that in the next 12 months, they plan to support SQL Server. This'll be an interesting choice for many of my enterprise customers, because partially managed SQL Server has been a very popular offering on some competitor clouds, notably the AWS RDS service.
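Since you would generally trigger the failover from a script rather than the console button, here is a sketch of what that scripted playbook step could look like with gcloud; "my-instance" is a placeholder instance name:

```shell
# Sketch: script the failover and backup checks shown in the console.
# "my-instance" is a placeholder.

# Trigger a failover to the standby (same effect as the console button).
gcloud sql instances failover my-instance

# Confirm that automated backups have been taken for this instance.
gcloud sql backups list --instance=my-instance

# Optionally take an on-demand backup, e.g. as part of a DR drill.
gcloud sql backups create --instance=my-instance
```

Wiring commands like these into a tested runbook is what turns the built-in failover capability into an actual recovery time objective you can measure.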
So it'll be really interesting to see the demand from my customers for managed Cloud SQL for SQL server.