By Jie Zhang, Certified Google Cloud Consultant
Being a software developer these days is fun. Make that AWESOMELY fun! It’s so much fun that sometimes you can get distracted by all the shiny new toys in your developer toolbox, like Google Cloud.
As a systems integrator that builds custom content- and data-centric solutions for customers, we’re always looking to leverage the best tooling to help our customers solve their business challenges.
Let’s take a quick tour of how our work has changed over the past 15 years and how Google Cloud has revolutionized the value strategic consultants like Flatirons can provide to clients for a higher return on investment.
2005 – On-Prem, Big Iron, Custom Everything

Image 1
In 2005, Flatirons was developing custom solutions in the publishing vertical. Virtualization was starting to be used in development, but rarely in production. A typical dev/production deployment stack looked like the one shown in image 1.
Each layer would be provided by a different vendor. Procurement lead time was unpredictable but typically ran 6-8 weeks, covering budget approval, selection of multiple vendors, shipping, verification of hardware and software compatibility across vendors, and installation and configuration.
Once the hardware and software arrived, we would work with internal departments to make sure that all corporate policies were applied correctly and to confirm patch levels, firewall rules, backups, and so on. We would spend hundreds of hours on the processes, forms, and meetings needed to get all of this set up, time taken away from strategic, value-adding work for the customer.
Capex spending would frequently run to six figures and could cause an entire project proposal to be rejected. Scaling up or down was cumbersome, expensive, and slow.
2010 – IaaS Matures

Image 2
We did some great work in those five years, but we recognized that we were spending A LOT of time (time our clients were paying for) putting complex infrastructure in place rather than addressing the functional requirements of solutions to our customers’ problems.
We started leveraging emerging tooling and infrastructure (see image 2).
As we grew, we acquired data centers and started hosting some pieces of our business. In other lines of business, we used third-party hosting.
Our development velocity increased but we still faced long and expensive procurement cycles and significant complexity and brittleness developing and deploying solutions.
2015 – Coordinating Computing Sous Chefs
After a few years in the hosting business, we realized that it wasn’t a great space for us to be in – we didn’t have the scale to build a good business model around hosting, and it felt like an area where our offerings weren’t competitive: too expensive and not feature-complete.
We started to leverage Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) offerings:

Image 3
By leveraging these emerging technologies and providers, we could focus more of our energy on solving unique customer problems and spend less time fussing with database installs, network configurations, data center operations, and so on.
For some vendors, these services are their bread and butter. But for Flatirons, it felt like coordinating a bunch of computing sous chefs. We want to work with our customers as the head chef to their front-of-house maître d’!
2020 – May I Take Your Order?
Today, the evolution continues. To slightly rephrase our analogy, let’s look at computing cloud services from the perspective of pizza – one of the primary food groups for a developer!
In this analogy (see image 4), you pay for what you need and the service is flexible to suit your needs. If you have a full gourmet kitchen and fresh local ingredients, maybe legacy on-premises still works for you. If you’re overworked and don’t have time to make pizza from scratch, purchasing a pizza and baking it at home works for you. If you live in an RV and have no oven, it’s pizza delivery. If you’re backpacking across the country, then you’re limited to dining out.
Having access to all these possible services allows us to customize solutions to fit customers’ unique requirements, matching scale, price, performance, and availability.
2020 – A la Carte Services
Now it’s 2020 and the toolbox has exploded! Just look at the tools available to developers for building solutions with Google Cloud.
Yes, the font size is too small to read! And that’s intentional – for many of your solutions, you’ll only use a few of these services.
Now we face the big question: how do you choose what to use? We can offer guidance on selecting tools as you build and deploy your cloud-based solutions.
Let’s consider how we currently build solutions that have these non-functional requirements:
- Scale up to Internet scale – geographically distributed disaster recovery (DR), five 9s of availability, petabyte storage
- Scale down to zero-cost, self-service demo environments while maintaining high performance within a geographical region – small computing, small storage, small networking
- Fast deployments (less than one hour for a customer private full stack, including demo data and applications)
- Modular deployments using containers (Docker)
- Orchestration, self-healing, alarms on resource exhaustion (Stackdriver, Kubernetes)
- Machine Learning
  - Image Recognition
  - Natural Language Processing
- Flexible query/storage engine
  - Normalized and denormalized data
  - SQL and NoSQL
  - Bulk loader
- Flexible storage engine
  - Tiered storage
  - Transparent storage migration
  - Audit logging
- Networking
  - IDS/IPS
  - Highly configurable firewall rule sets
Google Cloud Platform (GCP) allows customers to scale to internet scale: GCP is built on the same infrastructure Google uses internally to crawl and index the web and to run all of its other services. And since most Google Cloud services are designed to be scriptable, scripts can be written to scale resources up and down as needed, even down to zero.
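As an illustration, here’s a minimal sketch of that scripted elasticity, using the google-cloud-compute Python client to resize a managed instance group (the project, zone, and group names are hypothetical):

```python
from google.cloud import compute_v1

def scale_demo_environment(project: str, zone: str, group: str, size: int) -> None:
    """Resize a managed instance group; size=0 drops its compute cost to zero."""
    client = compute_v1.InstanceGroupManagersClient()
    # resize() is asynchronous; result() blocks until the operation completes.
    operation = client.resize(
        project=project,
        zone=zone,
        instance_group_manager=group,  # hypothetical, pre-existing group
        size=size,
    )
    operation.result()

# Scale the demo environment to zero overnight...
scale_demo_environment("my-project", "us-central1-a", "demo-mig", 0)
# ...and back up to three instances for the workday.
scale_demo_environment("my-project", "us-central1-a", "demo-mig", 3)
```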
Compute Resources
Google compute resources can be thought of as VMs in the cloud. On the Compute Engine (right) side of the chart (see image 6), we provision a VM and control it and all the software running on it; Google takes care of the infrastructure supporting the VM.

Image 6
As we move left, we run containers on VMs: we define the containers, and Google manages the VMs. Continuing all the way to the left, we run applications with no code to manage: we focus on data analysis, and Google takes care of everything else.
In our practice, we see that older applications typically use the compute resources shown on the right of the chart, while more modern, modular applications use those shown on the left. This is a critical aspect of the GCP environment: very few customers have a greenfield environment for their deployments, so being able to support a wide range of compute environments is essential.
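For the right-hand end of that spectrum, here’s a minimal sketch of provisioning a raw Compute Engine VM with the google-cloud-compute Python client; the project, zone, and instance names are hypothetical, and everything above the VM is then ours to manage:

```python
from google.cloud import compute_v1

def create_vm(project: str, zone: str, name: str) -> None:
    """Provision a small Debian VM that we then fully control."""
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-small",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-10",
                ),
            )
        ],
        # Attach to the project's default VPC network.
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    client = compute_v1.InstancesClient()
    # insert() is asynchronous; result() blocks until provisioning finishes.
    client.insert(project=project, zone=zone, instance_resource=instance).result()

create_vm("my-project", "us-central1-a", "demo-vm")
```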
Logging & Monitoring

Image 7 | Source: Google
Many of our customers run applications that have stringent audit log requirements and are also extremely price sensitive. As we’ve migrated customers into GCP, we frequently hear concerns, such as: How can I guarantee that all access to my application is logged? How can I monitor system health, and how can I ensure that my cloud deployment costs are well known and predictable?
To answer these questions, we use several GCP components:
- Stackdriver for application and infrastructure observability
- Cloud Pub/Sub event management to enable high-performance audit logging (sketched below)
- Cloud Storage and BigQuery to store and analyze access patterns, both to proactively diagnose system resource issues and to satisfy regulatory requirements for audit logging
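Here’s a minimal sketch of that Pub/Sub audit pattern, assuming a topic named audit-events already exists (the project, topic, and record fields are hypothetical); a subscriber on the topic would then drain events into Cloud Storage or BigQuery:

```python
import json
from datetime import datetime, timezone

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project and pre-existing topic.
topic_path = publisher.topic_path("my-project", "audit-events")

def log_access(user: str, resource: str, action: str) -> None:
    """Publish one audit record without blocking the application's hot path."""
    event = {
        "user": user,
        "resource": resource,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # publish() batches messages for throughput and returns a future.
    future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
    future.result()  # block here only if you need a durability guarantee

log_access("jzhang", "/records/123/images/4", "READ")
```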
Networking

Image 8 | Source: Google
Many of our customers are geographically distributed and are interested in provisioning systems that span geographic regions, for both performance and availability reasons. Image 8 above shows an example of a GCP networking configuration we have deployed, using two Compute Engine VMs, one in the US and one in Europe.
From the perspective of the application, container, and virtual node, these nodes appear to be on a single LAN. Customers deploying data governed by HIPAA, SOX, or GDPR increasingly see the value in Google’s construction of its own fiber networks, wherein communication inside GCP never leaves the GCP network.
GCP load balancing can also be configured to handle traffic in the US and Europe by routing to local services when available. This type of routing flexibility provides high availability, application simplicity, and performance optimizations.
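Networking in GCP is configured through the same APIs as everything else. Here’s a minimal sketch of the highly configurable firewall rule sets called out in our requirements list, using the google-cloud-compute client (the project, rule name, and CIDR range are hypothetical):

```python
from google.cloud import compute_v1

def allow_https_from(project: str, source_cidr: str) -> None:
    """Admit HTTPS traffic only from a trusted address range."""
    rule = compute_v1.Firewall(
        name="allow-https-corp",  # hypothetical rule name
        network="global/networks/default",
        direction="INGRESS",
        source_ranges=[source_cidr],
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
    )
    client = compute_v1.FirewallsClient()
    # insert() is asynchronous; result() blocks until the rule is live.
    client.insert(project=project, firewall_resource=rule).result()

allow_https_from("my-project", "203.0.113.0/24")  # RFC 5737 example range
```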
Storage

Image 9 | Source: Google
Image 9 is a standard Google decision tree for determining which type of storage service is best suited to an application.
Many of our solutions have sophisticated storage requirements, including:
- Extremely low cost for infrequently accessed data
- Retention policies executed in hardware
- Transparent migration between fast/expensive storage and slow/cheap storage based on policy rules or usage patterns
- Complicated data access requirements, ranging from access to petabyte-sized object storage systems to lightweight, zero-cost prototyping environments
- Support for a variety of data models – relational, NoSQL, content-addressable object storage, IDMS, IMS, VSAM
In our practice, we’ve come to rely on the availability of this rich set of storage services within GCP. Using such a wide variety of options allows us to optimize the solution we deploy for a customer. There is no one-size-fits-all answer; working with GCP storage tools enables us to quickly and effectively architect a solution without having to reinvent storage-level best practices for every deployment.
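As one example, here’s a minimal sketch of policy-based tiering using Cloud Storage lifecycle rules, which move objects to cheaper storage classes as they age with no application changes (the bucket name and retention windows are hypothetical):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("flatirons-archive-demo")  # hypothetical bucket

# Move objects to Nearline after 30 days and Coldline after a year;
# delete anything older than a (hypothetical) seven-year retention window.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)
bucket.add_lifecycle_delete_rule(age=7 * 365)
bucket.patch()  # persist the updated lifecycle configuration
```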
Query

Image 10 | Source: Google
Image 10 shows a trivial example of GCP functions being used to correlate data from weather stations for data analytics. The analysis can range from visualizing the data with tools such as Jupyter or Tableau to applying Google’s machine learning to glean information from it. A more real-world example is shown in image 11 below, using Jupyter and BigQuery to evaluate opiate prescription patterns for a patient population.

Image 11
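To give a feel for the weather-station example in image 10, here’s a minimal sketch that queries BigQuery’s public NOAA GSOD dataset using the google-cloud-bigquery client (the query itself is purely illustrative):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Average 2019 temperature per station, hottest ten stations first.
query = """
    SELECT stn, AVG(temp) AS avg_temp, COUNT(*) AS readings
    FROM `bigquery-public-data.noaa_gsod.gsod2019`
    GROUP BY stn
    ORDER BY avg_temp DESC
    LIMIT 10
"""

# query() submits the job; result() streams rows once it completes.
for row in client.query(query).result():
    print(f"station {row.stn}: {row.avg_temp:.1f} F over {row.readings} readings")
```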
Machine Learning

Image 12 | Source: Google
GCP provides several AI functions that we’re exploring to generate actionable insights for our customers. In our health care practice, we’re researching the use of GCP AI tools to enable medical researchers to investigate legacy data going back 20-30 years to evaluate the progression of disease in patients.
For example, an oncology researcher may phrase a request such as: “Show me all radiology images that look like DCIS (Ductal Carcinoma In Situ) where the radiologist’s narrative did not reference DCIS. Of those patients, show me all who developed breast cancer, any comorbidities, and their survival rates over 5, 10, and 15 years.”
For us to develop that system in 2010 would have required thousands of hours of work developing and integrating NLP, image recognition, and data query tools. Today, we can provision the core tools in minutes and focus our energies on data analysis, visualization, and training our ML models.
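As a taste of those core tools, here’s a minimal sketch of entity extraction from a clinical narrative using the Cloud Natural Language API; the narrative text is invented, and a production medical pipeline would more likely layer on Google’s specialized healthcare offerings:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# An invented one-line radiology narrative.
narrative = "Mammogram shows scattered microcalcifications; no discrete mass identified."
document = language_v1.Document(
    content=narrative,
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Ask Google's pre-trained model which entities appear in the narrative.
response = client.analyze_entities(document=document)
for entity in response.entities:
    print(f"{entity.name} (salience {entity.salience:.2f})")
```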
Compliance
Google Cloud Platform has completed compliance verifications for many of its products: for each compliance requirement, Google has validated which services meet it (see image 13).
Of course, compliance is a team effort – the solution using GCP must also be designed correctly to be compliant.

Image 13 | Source: Google
What’s Next? Oh, the Places We’ll Go!
Thanks for taking this tour of our history of adopting cloud computing resources and, specifically, Google Cloud services. The advances afforded to consultants and developers like us, and to clients like you, are truly phenomenal. Gone are the days of spending most of our time on infrastructure and solution environments. Today, thanks to environments like Google Cloud Platform, we spend less time on plumbing and more time focusing on the strategic needs of your business, delivering our clients a higher return on investment.
Want to chat about how you can use cloud services to solve your content or data challenges? Contact us!