Storage

Cloud Storage

Concept

  • Cloud Storage offers four storage classes: Standard, Nearline, Coldline, and Archive. They trade lower storage cost for higher retrieval cost and a longer minimum storage duration.

  • An ACL is a mechanism that defines who has access to your buckets and objects, as well as their level of access.

  • The maximum number of ACL entries you can create for a bucket or object is 100.

  • Each ACL entry consists of a scope, which defines who can perform the action, and a permission, which defines what operations they can perform.

  • Signed URLs grant time-limited access to a specific resource; anyone holding the URL can use it, without account-based authentication.

  • You create a URL that grants read or write access to a specific Cloud Storage resource and specifies when the access expires.

  • In Cloud Storage, objects are immutable: an uploaded object cannot change throughout its storage lifetime.

  • When Object Versioning is enabled, you can list archived versions of an object, restore the live version of an object to an older state, or permanently delete an archived version as needed.

  • You can assign a lifecycle management configuration to a bucket. The configuration is a set of rules that apply to all objects in the bucket. When an object meets the criteria of one of the rules, Cloud Storage automatically performs the specified action on the object, e.g., downgrade the storage class of objects older than a year to Coldline storage.

  • Transfer Appliance is a hardware appliance you can use to securely migrate large volumes of data from hundreds of terabytes up to one petabyte to Google Cloud without disrupting business operations.

  • The Storage Transfer Service enables high-performance imports of online data. The data source can be another Cloud Storage bucket, an Amazon S3 bucket, or an HTTP/HTTPS location.

  • Offline media import is a third-party service, where physical media such as storage arrays, hard disk drives, tapes, and USB flash drives is sent to a provider who uploads the data.
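The scope/permission model above can be sketched as a JSON ACL, applied with `gsutil acl set` (the entities and project number below are hypothetical):

```json
[
  {
    "entity": "user-jane@example.com",
    "role": "READER"
  },
  {
    "entity": "project-owners-123456789",
    "role": "OWNER"
  }
]
```

Saved as `acl.json`, this could be applied with `gsutil acl set acl.json gs://BUCKET_NAME`; each entry pairs a scope (`entity`) with a permission (`role`).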
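The lifecycle example above (downgrading objects older than a year to Coldline) could be written as the following configuration; the bucket name and exact threshold are assumptions:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {
          "type": "SetStorageClass",
          "storageClass": "COLDLINE"
        },
        "condition": {
          "age": 365
        }
      }
    ]
  }
}
```

Saved as `lifecycle.json`, it could be applied with `gsutil lifecycle set lifecycle.json gs://BUCKET_NAME`.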

Operation

  • Create a bucket

gsutil mb -p PROJECT_ID gs://BUCKET_NAME
gsutil mb -p PROJECT_ID -c STORAGE_CLASS -l BUCKET_LOCATION -b on gs://BUCKET_NAME
  • List and delete the buckets

gsutil ls -p PROJECT_ID
gsutil rm -r gs://BUCKET_NAME
  • Upload and download objects

# Upload a single object
gsutil cp OBJECT_LOCATION gs://DESTINATION_BUCKET_NAME/
# Download an object
gsutil cp gs://BUCKET_NAME/OBJECT_NAME SAVE_TO_LOCATION
  • Transfer data between buckets through the API

POST https://storagetransfer.googleapis.com/v1/transferJobs
{
  "description": "YOUR DESCRIPTION",
  "status": "ENABLED",
  "projectId": "PROJECT_ID",
  "schedule": {
      "scheduleStartDate": {
          "day": 1,
          "month": 1,
          "year": 2015
      },
      "startTimeOfDay": {
          "hours": 1,
          "minutes": 1
      }
  },
  "transferSpec": {
      "gcsDataSource": {
          "bucketName": "GCS_SOURCE_NAME"
      },
      "gcsDataSink": {
          "bucketName": "GCS_SINK_NAME"
      },
      "transferOptions": {
          "deleteObjectsFromSourceAfterTransfer": true
      }
  }
}

Cloud SQL

Concept

  • It is a fully managed database service for MySQL and PostgreSQL

  • Cloud SQL delivers high performance and scalability with up to 64 terabytes of storage capacity, 60,000 IOPS, and 624 gigabytes of RAM per instance.

  • Through synchronous replication to each zone's persistent disk, all writes made to the primary instance are replicated to disks in both zones before the transaction is reported as committed. In the event of an instance or zone failure, the persistent disk is attached to the standby instance, which becomes the new primary instance.

  • Cloud SQL also provides automated and on-demand backups with point-in-time recovery.

  • Allows importing and exporting databases using mysqldump, or importing and exporting CSV files.

  • Cloud SQL can scale up (which requires a machine restart) or scale out using read replicas.

  • Connecting to a Cloud SQL instance (diagram)

  • Choosing Cloud SQL versus a self-managed database on a VM (diagram)

Operation

  • Create instance

gcloud sql instances create INSTANCE_NAME \
--cpu=NUMBER_CPUS \
--memory=MEMORY_SIZE \
--region=REGION

# Set the admin password
gcloud sql users set-password root \
--host=% \
--instance INSTANCE_NAME \
--password PASSWORD

# Start the instance
gcloud sql instances patch INSTANCE_NAME \
--activation-policy=ALWAYS

# Stop the instance
gcloud sql instances patch INSTANCE_NAME \
--activation-policy=NEVER

# Delete the instance
gcloud sql instances delete INSTANCE_NAME
  • Connect to SQL instance

# From Cloud Shell
gcloud sql connect INSTANCE_ID --user=root
# From a local machine or VM
mysql --host=INSTANCE_IP --user=root --password
  • Export Data

gcloud sql export csv INSTANCE_NAME gs://BUCKET_NAME/FILE_NAME \
--database=DATABASE_NAME \
--offload \
--query=SELECT_QUERY

Cloud Spanner

  • Offers high availability, horizontal scalability, and configurable replication

  • The database placement is configurable, meaning you can choose which region to put your database in. The replication of data will be synchronized across zones using Google's global fiber network.

  • Provides petabytes of capacity
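One way Spanner combines horizontal scalability with data locality is schema interleaving, which physically co-locates child rows with their parent row; a hypothetical sketch (table and column names are invented):

```sql
-- Hypothetical schema: Albums rows are stored alongside their parent
-- Singers row, keeping related data on the same split as the database
-- scales out across servers.
CREATE TABLE Singers (
  SingerId   INT64 NOT NULL,
  SingerName STRING(1024)
) PRIMARY KEY (SingerId);

CREATE TABLE Albums (
  SingerId   INT64 NOT NULL,
  AlbumId    INT64 NOT NULL,
  AlbumTitle STRING(1024)
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE;
```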

Cloud Firestore

Concept

  • It is a fast, fully managed, serverless, cloud-native NoSQL document database

  • Its client libraries provide live synchronization and offline support, and its security features and integrations with Firebase and GCP accelerate building truly serverless apps.

  • It supports ACID transactions: if any operation in the transaction fails and cannot be retried, the whole transaction fails.

  • With automatic multi-region replication and strong consistency, your data is safe and available even when disasters strike.

  • Datastore mode: backward compatible with Cloud Datastore. You get Cloud Firestore's improved storage layer while keeping Cloud Datastore's system behavior; transactions are no longer limited to 25 entity groups, and writes to an entity group are no longer limited to 1 per second.

  • Native mode: provides access to all of the new Cloud Firestore features

Operation

  • Export Data to Cloud storage

gcloud firestore export gs://[BUCKET_NAME]

Cloud Bigtable

  • It is a fully managed NoSQL database with petabyte scale and very low latency.

  • Ideal for AdTech, FinTech, and IoT workloads

  • Easy integration with open source big data tools

  • Stores data in massively scalable tables, each of which is a sorted key-value map.
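The sorted key-value layout can be illustrated offline: Bigtable keeps rows ordered lexicographically by row key, so a well-chosen key design (the `DEVICE#TIMESTAMP` scheme below is a hypothetical example) makes related rows contiguous and cheap to scan by prefix:

```shell
# Bigtable keeps rows sorted lexicographically by row key; LC_ALL=C sort
# mimics that ordering for three hypothetical IoT row keys. Rows for
# device07 become contiguous, so a prefix scan on "device07#" would read
# them in timestamp order.
printf '%s\n' \
  'device42#20240102T0915' \
  'device07#20240101T2330' \
  'device07#20240101T0800' \
| LC_ALL=C sort
# device07#20240101T0800
# device07#20240101T2330
# device42#20240102T0915
```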
