Google Cloud Platform | Cloud Academy Blog
https://cloudacademy.com/blog/category/google-cloud-platform/

Google Unveils Gemini AI
https://cloudacademy.com/blog/google-unveils-gemini-ai/
Mon, 11 Dec 2023

The post Google Unveils Gemini AI appeared first on Cloud Academy.

On December 6, Google revealed its latest and most powerful AI model, named "Gemini". Google claims it represents a significant leap forward in the field of artificial intelligence, boasting capabilities far exceeding those of any previous model.

What makes this AI model different is that it was built from the ground up to be multimodal. That means Gemini can understand and process information from various sources, including text, images, audio, video, and code, and it can transform any type of input into any type of output. This sets it apart from earlier models that were limited to handling specific types of data.

Capabilities

As a result, Gemini can:

  • Generate text and images: This should result in more engaging and interactive experiences, and even open the doors to new forms of artistic expression.
  • Answer complex questions: With its multimodal understanding, Gemini is able to tackle intricate queries that span multiple domains.
  • Explain complex concepts: Through its sophisticated reasoning abilities, Gemini can break down complicated ideas into easily digestible explanations.
  • Write code: Gemini can understand and generate code in multiple languages, making it a valuable tool for programmers.
  • Surpass human experts: On the MMLU (massive multitask language understanding) benchmark, which tests knowledge and problem-solving across 57 subjects, Gemini outperformed human experts.

Applications

If all this is true, the applications could be almost endless.

  • Science: By analyzing vast amounts of data, Gemini could accelerate scientific discoveries and breakthroughs.
  • Education: With Gemini’s ability to understand diverse information, personalized learning experiences could be tailor-built to match individual needs.
  • Healthcare: Gemini could assist with medical diagnosis and treatment by analyzing complex data and making custom recommendations.
  • Arts: Gemini could empower artists and creators to explore new forms of expression and push the boundaries of creativity.

Versions

Gemini will be available in three sizes:

  • Gemini Ultra: The largest and most powerful model for highly complex tasks.
  • Gemini Pro: Best performing model for a wide range of tasks.
  • Gemini Nano: The most efficient model for use on mobile devices.

Availability

Starting December 2023, Gemini will be integrated into various Google products and services including:

  • Bard: Google’s AI chatbot already uses Gemini Pro for advanced reasoning and understanding. Gemini Ultra will be added to Bard in early 2024 to create a new experience called Bard Advanced.
  • Pixel: Pixel 8 Pro will be the first smartphone to run Gemini Nano, powering new features like Summarize in the Recorder app.
  • Search: Gemini will be used to provide more relevant and informative search results.
  • Ads: Gemini will optimize ad targeting for greater effectiveness.
  • Chrome: Gemini will enhance the browsing experience with personalized features.
  • Duet AI: Gemini will power Duet AI for more seamless and natural interactions.

The Future of AI

With its exceptional capabilities, Gemini could be a significant leap forward in AI development. It just might have the potential to transform the way we live, work, and interact with the world.

Google Cloud Certifications: Which is Right for You and Your Team?
https://cloudacademy.com/blog/google-cloud-certification/
Tue, 27 Jun 2023

The post Google Cloud Certifications: Which is Right for You and Your Team? appeared first on Cloud Academy.

In the rapidly evolving world of cloud computing, certifications can validate your skills and keep you ahead of the curve.

Google Cloud Platform, a leading service in the industry, offers a suite of certifications designed to equip professionals with the knowledge and expertise needed to leverage Google’s cloud technology.

This blog post is aimed at businesses and professionals seeking to understand the various Google Cloud Certifications available today. We’ll outline each certification, its purpose, and who it’s best suited for, providing a comprehensive guide to navigate your cloud career.

But as a tech leader, how do you know which certifications to guide your team toward, and how do you assess your team’s skill levels? Our tech skill assessments save you time by identifying the exact skills your team needs. Fill out the form below and request a free demo!

Is a Google Cloud certification worth it?

A Google Cloud certification is definitely worth it for professionals seeking to demonstrate their knowledge and skills in cloud architecture and Google Cloud technologies. The certification can open up new job opportunities, enhance your credibility in the field, and potentially increase your salary. As more companies continue to adopt cloud technologies, the demand for certified professionals is growing. Having a Google Cloud certification can give you a competitive advantage in the job market.

Why is getting a Google Cloud certification important?

Getting a Google Cloud certification is important for several reasons:

Knowledge and skills: The process of preparing for the certification exams helps you gain a comprehensive understanding of Google Cloud technologies. This can greatly improve your ability to effectively design, develop, and manage cloud solutions.

Credibility: Having a certification from a well-known organization like Google can enhance your credibility as a cloud professional. It demonstrates to employers that you have the skills and knowledge necessary to perform tasks associated with Google Cloud technologies.

Career advancement: Many employers prefer or require certifications for certain job roles. By obtaining a Google Cloud certification, you can open up new job opportunities and potentially increase your earning potential.

Industry recognition: A Google Cloud certification is widely recognized in the IT industry. It can help you stand out from the crowd and get noticed by potential employers.

How long does it take to get a Google Cloud certification?

The time it takes to get a Google Cloud certification can vary depending on the specific certification and your level of experience and familiarity with Google Cloud technologies. The process involves self-study or formal training, followed by an exam. Preparation can take several weeks to several months, depending on the amount of time you can dedicate to studying.

Why should I get Google Cloud certified?

Google Cloud Platform (GCP) has evolved from being a niche player to a serious competitor to Amazon Web Services and Microsoft Azure.

In 2022, research firm Gartner placed Google in the Leaders quadrant of its Magic Quadrant™ for Cloud AI Developer Services report for the third consecutive year. In the report, Gartner recommended the platform’s language, vision, and AutoML products for developers, describing it as a comprehensive platform ready for enterprise-level execution.

With more and more companies using GCP for at least some of their cloud requirements, there is an increased demand for Google Cloud Platform certification training. Google now has 10 certification exams that IT professionals can use to validate their GCP skills.

Bringing your team up to speed on GCP can take time you may not have. But with our Cloud Certification Fast-track program, you can crush your certification goals with direct support from our team of cloud experts. Custom training, visibility into progress, and end-to-end program management are just some of the features. Contact our team today to find out how the program works.

What are the Google Cloud certifications?

As of today, Google Cloud Platform offers 10 certifications. Let’s go through all of them and find the best one for your cloud career:

  1. Google Cloud Digital Leader
  2. Google Associate Cloud Engineer
  3. Google Professional Cloud Architect
  4. Google Professional Data Engineer
  5. Google Professional Cloud Developer
  6. Google Professional Cloud Security Engineer
  7. Google Professional Cloud DevOps Engineer
  8. Google Professional Machine Learning Engineer
  9. Google Professional Cloud Network Engineer
  10. Google Professional Cloud Database Engineer

Google Cloud Digital Leader

If you are wondering which Google Cloud certification is for beginners, here we are! This foundational-level certification is intended for people who want to prove they understand GCP fundamentals, cloud computing basics, and how Google Cloud services can be used to achieve an organization’s goals.
Specifically, the Google Cloud Digital Leader exam covers these subject areas:

  • Digital transformation with Google Cloud
  • Innovating with data and Google Cloud
  • Infrastructure and application modernization
  • Google Cloud security and operations

The only prerequisite needed to enroll in the learning path and then get the Google Cloud Digital Leader certification is a general knowledge of computers and the internet. If you’re interested in getting certified, you should go through Cloud Academy’s Google Cloud Digital Leader Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Cloud Digital Leader Exam.

Google Associate Cloud Engineer

This role-based Associate-level certification is intended for people who deploy applications, monitor operations, and manage enterprise solutions. Specifically, the Google Associate Cloud Engineer exam covers these subject areas:

  • Setting up a cloud solution environment
  • Planning and configuring a cloud solution
  • Deploying and implementing a cloud solution
  • Ensuring the successful operation of a cloud solution
  • Configuring access and security

The prerequisites needed to enroll in the learning path and then get the Google Associate Cloud Engineer certification are:

  • Basic understanding of Google Cloud Platform products and services
  • Basic understanding of cloud concepts, such as virtual machines, containers, and networking

If you’re interested in getting certified, go through Cloud Academy’s Google Associate Cloud Engineer Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Associate Cloud Engineer Exam.

Google Professional Cloud Architect

This is one of Google’s professional certifications, so it is intended to prove you have advanced design and implementation skills. Professional exams expect you to have practical, hands-on knowledge related to your particular job. This certification is intended for IT professionals who know how to build solutions on Google Cloud Platform. Specifically, the Google Cloud Architect exam covers these six subjects:

  • Designing and planning a cloud solution architecture
  • Managing and provisioning a solution infrastructure
  • Designing for security and compliance
  • Analyzing and optimizing technical and business processes
  • Managing implementation
  • Ensuring solution and operations reliability

The prerequisite needed to enroll in the learning path and then get the Google Cloud Architect certification is general knowledge of IT concepts. If you are interested in getting certified and becoming a professional cloud architect, you should go through Cloud Academy’s Google Cloud Architect Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Cloud Architect Exam.

Google Professional Data Engineer

This professional Google Cloud certification is intended for data professionals who enable data-driven decision-making by collecting, transforming, and publishing data. Specifically, the Google Cloud Data Engineer exam covers these areas:

  • Designing data processing systems
  • Building and operationalizing data processing systems
  • Operationalizing machine learning models
  • Ensuring solution quality

The prerequisite needed to enroll in the learning path and then get the Google Data Engineer certification is basic database knowledge. If you’re interested in getting certified and becoming a professional data engineer, go through Cloud Academy’s Google Cloud Data Engineer Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Data Engineer Exam.

Google Professional Cloud Developer

If you are wondering which Google Cloud certification is best for developers, you are in the right place. This role-based professional certification is intended for software developers who build applications on Google Cloud Platform. Specifically, the Google Cloud Developer exam covers five areas:

  • Designing highly scalable, available, and reliable cloud-native applications
  • Building and testing applications
  • Deploying applications
  • Integrating Google Cloud services
  • Managing deployed applications

The prerequisites needed to enroll in the learning path and then get the Google Cloud Developer certification are:

  • General knowledge of IT architecture
  • Software development experience

If you are interested in getting certified and becoming a professional cloud developer, you should go through Cloud Academy’s Google Cloud Developer Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Cloud Developer Exam.

Google Professional Cloud Security Engineer

This role-based professional certification is intended for people who have mastered cloud security, including identity and access management, data protection, network security defenses, and more. Specifically, the Google Cloud Security Engineer exam covers these areas:

  • Configuring access within a cloud solution environment
  • Configuring network security
  • Ensuring data protection
  • Managing operations in a cloud solution environment
  • Ensuring compliance

The prerequisite needed to enroll in the learning path and then get the Google Cloud Security Engineer certification is a basic understanding of cloud concepts such as virtual machines, containers, and networking. If you are interested in getting certified and becoming a professional cloud security engineer, you should go through Cloud Academy’s Google Cloud Security Engineer Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Cloud Security Engineer Exam.

If you’re a tech manager, you know that security is one of the most significant issues holding back cloud adoption. Cloud Academy’s training library focuses deeply on IT security, helping your team stay up to date on new security threats and ways to resolve them.
If you’re interested in understanding how to keep your cloud environment secure with Cloud Academy, contact us and request a free demo!

Google Professional Cloud DevOps Engineer

This role-based professional certification is intended for people who are responsible for efficient development operations in an organization and can balance service reliability and delivery speed. Specifically, the Google Cloud DevOps Engineer exam covers these areas:

  • Applying site reliability engineering principles to a service
  • Building and implementing CI/CD pipelines for a service
  • Implementing service monitoring strategies
  • Optimizing service performance
  • Managing service incidents

The prerequisites needed to enroll in the learning path and then get the Google Cloud DevOps Engineer certification are:

  • Basic understanding of cloud concepts (virtual machines, containers, networking, etc.)
  • General knowledge of IT architecture
  • Familiarity with DevOps practices and principles

If you are interested in getting certified and becoming a professional DevOps engineer, you should go through Cloud Academy’s Google Cloud DevOps Engineer Exam Preparation.

Google Professional Machine Learning Engineer

This role-based professional certification is intended for people who design, build, and deploy machine learning models to solve business challenges using Google Cloud technologies. Specifically, the Google Cloud Machine Learning Engineer exam covers six subject areas:

  • Framing machine learning problems
  • Architecting machine learning solutions
  • Designing data preparation and processing systems
  • Developing machine learning models
  • Automating and orchestrating machine learning pipelines
  • Monitoring, optimizing, and maintaining machine learning solutions

The prerequisites needed to enroll in the learning path and then get the Google Machine Learning Engineer certification are:

  • Basic understanding of cloud concepts
  • Experience writing Python code

If you are interested in getting certified and becoming a Google Cloud professional ML engineer, you should check out Cloud Academy’s Google Cloud Machine Learning Engineer Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Cloud Machine Learning Engineer Exam.

Google Professional Cloud Network Engineer

This role-based professional certification is intended for people who implement and manage network architectures in GCP. Specifically, the Google Cloud Network Engineer exam covers these subject areas:

  • Designing, planning, and prototyping a Google Cloud network
  • Implementing Virtual Private Cloud (VPC) instances
  • Configuring network services
  • Implementing hybrid interconnectivity
  • Managing, monitoring, and optimizing network operations

The prerequisite needed to enroll in the learning path and then get the Google Cloud Network Engineer certification is a basic understanding of cloud concepts such as virtual machines, containers, and networking. If you are interested in getting certified and becoming a professional cloud network engineer, you should go through Cloud Academy’s Google Cloud Network Engineer Exam Preparation and then take the practice exam. But if you feel ready, you can go directly to Cloud Academy’s Google Cloud Network Engineer Exam.

Google Professional Cloud Database Engineer

This role-based professional certification is intended for people who want to learn to design, create, manage, and troubleshoot Google Cloud databases used by applications to store and retrieve data. Specifically, the Google Cloud Database Engineer exam covers four subject areas:

  • Design scalable and highly available cloud database solutions
  • Manage a solution that can span multiple database solutions
  • Migrate data solutions
  • Deploy scalable and highly available databases in Google Cloud

If you’re interested in getting certified, you should check out the Google Cloud Database Engineer certification.

What are some differences between these roles?

As you’ve seen, these Google Cloud Platform certifications are very role-specific, each demonstrating a high level of ability in its area. For some of these job roles, the differences are obvious, such as between a developer (someone who creates applications) and a security engineer (someone who secures data and infrastructure). Others need a bit more explanation.

Cloud Architect vs. Data Engineer

One of the biggest reasons organizations choose Google Cloud Platform is for its robust data services, such as BigQuery, Cloud Dataflow, and Vertex AI. How these services are used helps explain the different roles within Google Cloud certification. While a Cloud Architect knows how to design solutions using the above services and how to speak to other departmental stakeholders about big picture ideas, a Data Engineer has a much more in-depth understanding of how to build, maintain, and troubleshoot data processing systems on a daily basis. For example, a Data Engineer needs to have experience with cutting-edge concepts like neural networks and streaming analytics, and how to ingest data and prepare it for others, such as data scientists, to use.

Developer vs. DevOps Engineer

A Google Cloud Platform Developer understands how to create applications that are optimized for the deployment of Google’s infrastructure while also being comfortable with the development tools available on GCP. In contrast, a DevOps Engineer is responsible for designing and implementing the development infrastructure itself, particularly to automate the software development lifecycle. For example, a DevOps Engineer would build continuous integration / continuous delivery (CI/CD) pipelines.

How can I prepare for a Google Cloud certification exam?

Google Cloud certification exams are known for their difficulty and breadth of topics, so the importance of preparation and training beforehand cannot be overstated. Surprisingly, Google doesn’t provide a score at the end of an exam. Instead, it simply issues a “Pass” or “Fail” result, which doesn’t provide much help in terms of knowing what to study if you fail. This is why it’s important to thoroughly prepare for the exam, and this is where certification training with Cloud Academy can help.

Here are 5 steps to take before taking a Google Cloud certification exam:

  1. Take the relevant learning path on Cloud Academy. We offer many learning paths covering all you need to know in each Google Cloud Platform certification area. These learning paths include courses, hands-on labs, and exams for each certification.
  2. Get hands-on practice on GCP. Hands-on Labs and Lab Challenges provide easy ways for you to quickly get practical experience in real cloud environments without deploying your own infrastructure and running up extra costs.
  3. Review the outline in Google’s official exam guide and check for any knowledge gaps.
  4. The Cloud Architect exam includes questions about lengthy case studies. You can find these case studies in the exam guides. Make sure you read them thoroughly before taking the exam.
  5. Take one of Google’s own practice exams (such as the “sample questions” link on the Cloud Architect home page).

Each exam is two hours long (except for Cloud Digital Leader, which is 90 minutes long) and must be taken either as an online proctored exam (OLP) or at a Kryterion testing center. Kryterion has over 1,000 testing centers in 120 countries, so you should be able to find one in your area.

What’s the next step after taking the exam?

As I mentioned earlier, the result shown after an exam is simply “Pass” or “Fail.” If you don’t pass, you’re allowed to take the exam again after 14 days. If you fail on the second attempt, then you have to wait 60 days before the next attempt.

If you pass the exam, you’ll receive an online certification badge that can be displayed on sites like LinkedIn, and more importantly, a Google Cloud certification hoodie.

Once you’re certified, you’ll want to stay up to date on Google Cloud Platform as it continues to evolve. You can learn everything about Google Cloud in our Google Cloud content library.

Good luck and happy learning!

FAQs

Which Google Cloud certification is best?

The ‘best’ Google Cloud certification largely depends on your career goals, current skills, and future plans. For those starting their journey in cloud technologies, the Associate Cloud Engineer certification can be a good starting point. If you have significant experience with Google Cloud technologies and are interested in designing cloud architecture, the Professional Cloud Architect certification is a great choice. For professionals focused on managing Google Workspace within an organization, the Google Professional Workspace Administrator certification would be best.

Which is the easiest Google Cloud certification to get?

The Associate Cloud Engineer certification is often considered the ‘easiest’ Google Cloud certification to get, mainly because it is geared towards individuals who are just starting their journey in cloud technologies. However, ‘easiest’ can be subjective and largely depends on your background, experience, and familiarity with Google Cloud technologies.

Which certification is better, Google Cloud or AWS?

The choice between Google Cloud and AWS certifications depends on your career goals, the specific technologies you work with, and the needs of your employer or prospective employers. Both certifications have their merits and are widely recognized in the industry. AWS certifications might be a better choice if you’re working primarily with AWS technologies or aiming to work in an AWS-dominant environment. On the other hand, Google Cloud certifications would be more beneficial if you’re focusing on Google Cloud technologies or seeking opportunities with companies that heavily use Google Cloud. Both platforms are popular and have large market shares, so either certification can be beneficial for your career.

How much does a Google Cloud certification cost?

The cost of a Google Cloud certification varies by exam. At the time of writing, the Cloud Digital Leader exam costs $99, the Associate Cloud Engineer exam costs $125, and each Professional-level exam costs $200 (plus tax where applicable), paid as a registration fee when you schedule the exam.

How long does it take to study for a Google Cloud certification?

The time it takes to study for a Google Cloud certification can vary widely depending on the specific certification and your level of experience with Google Cloud technologies. Generally, you should expect to spend several weeks to a few months preparing for the exam. It’s important to take enough time to thoroughly understand the exam content and feel confident in your ability to pass the exam.

Want to understand how Cloud Academy can help your team get GCP certified and always stay up to date?

With our Cloud Certification Fast-track program, you can crush your certification goals with direct support from our team of cloud experts. Custom training, visibility into progress, and end-to-end program management are just some of the features. Fill out the form to get a free demo and find out how the program works.

Fill out the form below and get a free demo!

The post Google Cloud Certifications: Which is Right for You and Your Team? appeared first on Cloud Academy.

Google Cloud Run: Automatically Scale Stateless Containers
https://cloudacademy.com/blog/google-cloud-run-automatically-scale-your-stateless-containers/
Tue, 14 Mar 2023

The post Google Cloud Run: Automatically Scale Stateless Containers appeared first on Cloud Academy.

Modern software development relies heavily on containers as a portable, standardized way of packaging applications and their dependencies. Google Cloud Run, a serverless platform, harnesses the power of containers to give developers a quick and effective way to deploy and operate their apps at scale.

Cloud Run offers a practical way to execute containerized apps without worrying about the underlying infrastructure, scaling up or down dynamically based on traffic. 

This article discusses the significance of Google Cloud Run, Cloud Run services and jobs, integrations, and more.

What is Google Cloud Run?

Google Cloud Run is a fully managed, serverless platform, meaning it lets developers run containerized applications without having to worry about the infrastructure that supports them. Because it scales automatically based on traffic and demand, developers can build and deploy stateless containers to the cloud with minimal operational overhead.

Cloud Run supports any programming language that can be packaged as a Docker container. As a result, programmers may quickly deploy their apps to Cloud Run using the language or framework of their choice.

Cloud Run offers pay-as-you-go pricing, so developers only pay for the resources they use, and only while their application is active. The platform dynamically scales up or down to meet demand, keeping the service available and responsive, which makes it a great fit for applications with unpredictable traffic patterns.

Cloud Run is built on Knative, a well-known open-source project that offers a standardized way to build, deploy, and manage serverless workloads on Kubernetes. As a result, developers benefit from Kubernetes’ scalability and flexibility without having to manage the underlying infrastructure themselves.
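To make the container contract concrete, here is a minimal sketch of a stateless HTTP service suitable for Cloud Run, which tells the container which port to listen on via the PORT environment variable. The handler, greeting text, and serve() helper are our illustration, not part of Cloud Run itself:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Minimal stateless handler: no local state survives between requests."""

    def do_GET(self):
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve():
    # Cloud Run injects the port to listen on via the PORT env var
    # (defaulting to 8080 when run locally).
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Packaged into a Docker image whose entrypoint calls serve(), this container could then be deployed with the gcloud CLI, and Cloud Run would scale instances up and down with traffic.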

Google Cloud Run Services and Jobs

When a developer deploys a container to Cloud Run, the platform dynamically scales it up or down based on incoming traffic. This means the application remains available and responsive at all times, even under a high volume of traffic.

Your code can run either as a service or as a job on Google Cloud Run; both run on the same infrastructure and integrate with other Google Cloud services.

  • Cloud Run Services: stateless applications packaged in a Docker container that can be quickly deployed to the cloud.
  • Cloud Run Jobs: short-lived tasks that run to completion on demand, without requiring you to manage the underlying infrastructure.

Google Cloud Run Services

In Google Cloud Run, a service is a stateless application that is packaged in a Docker container and can be quickly deployed to the cloud.

Developers can use any programming language and framework to create their services, as long as the application can be packaged in a container. Once the container is built, it can be quickly deployed to Cloud Run, and the platform handles the rest.

Cloud Run services scale dynamically with demand, keeping the application responsive and available. The platform can scale up from zero to handle requests and back down to zero when there are none, saving you money. To increase security, Cloud Run services can also be configured to run on a private network.

Google Cloud Run Services Features

  • Unique HTTPS Endpoints: When a developer deploys a Cloud Run Service, the platform gives it a unique HTTPS endpoint on its own subdomain under the *.run.app domain. Cloud Run manages TLS for you and supports WebSockets, HTTP/2 (end-to-end), and gRPC (end-to-end).
  • Built-In Traffic Management: Cloud Run Services include facilities for distributing traffic across multiple revisions of a service. You can send incoming traffic to the latest revision, roll back to a previous revision, or split traffic across several revisions at once to implement a gradual rollout. This ensures resources are used effectively and that users get a consistent experience.
  • Fast Request-Based Auto-Scaling: Cloud Run Services scale up or down automatically based on incoming requests, so the service can withstand high traffic loads while remaining accessible and responsive.
  • Public And Private Services: Cloud Run Services can be launched as either public or private. Public services are reachable from the internet, while private services can only be reached from within a VPC network.

Google Cloud Run Services Use Cases

The following are some of the most popular use cases of Google Cloud Run services.

  • Microservices and APIs: Cloud Run services are ideal for building and deploying microservices and APIs that communicate over HTTP or gRPC, whether as a REST or a GraphQL API. This enables efficient use of resources and keeps the service accessible and easy to consume.
  • Websites and web applications: Cloud Run services can host and scale web applications and sites built with any programming language and framework. You can use your preferred stack to build your web application, connect to your SQL database, and render dynamic HTML pages.
  • Streaming data processing: Cloud Run services can process streaming data, covering ingestion, transformation, and analysis. For example, Eventarc events and Pub/Sub push subscriptions can deliver messages directly to Cloud Run services.

Google Cloud Run Jobs

In addition to running services, Google Cloud Run can run jobs. A job is a short-lived task that runs to completion in Cloud Run without requiring you to manage the underlying infrastructure.

Jobs can be started manually or triggered automatically by events, such as a message on a Cloud Pub/Sub topic or a change in a Cloud Storage bucket. The container shuts down automatically after finishing the task, saving the user money.

Job with one task

A job can start a single container instance to run your code, which is a typical way to run a tool or script. You can also launch several identical, independent container instances simultaneously as an array job.

Jobs with multiple tasks

Array jobs provide a faster approach for work that can be divided into many independent tasks.

For instance, resizing and cropping 1,000 photos sequentially takes far longer than processing them in parallel across several container instances.
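Each task in a Cloud Run array job can use the CLOUD_RUN_TASK_INDEX and CLOUD_RUN_TASK_COUNT environment variables that the platform injects to pick its own share of the work. A minimal Python sketch of the photo example follows; the file names and the resize step are placeholders:

```python
import os

def task_slice(items, task_index, task_count):
    """Return the contiguous chunk of `items` this task is responsible for."""
    if not 0 <= task_index < task_count:
        raise ValueError("task_index must be in [0, task_count)")
    chunk = -(-len(items) // task_count)  # ceiling division
    return items[task_index * chunk:(task_index + 1) * chunk]

def main():
    # Cloud Run Jobs injects these variables into each task of an array job.
    index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", "0"))
    count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", "1"))
    photos = [f"photo-{i:04d}.jpg" for i in range(1000)]  # placeholder inputs
    for photo in task_slice(photos, index, count):
        pass  # a hypothetical resize_and_crop(photo) would go here
```

With 10 tasks, each instance processes its own block of 100 photos, and the whole batch finishes roughly ten times faster than a single sequential run.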

Google Cloud Run Jobs use cases

The following are some significant use cases of Google Cloud Run jobs.

  • Script or Tool: Developers can use Cloud Run jobs to run short-lived scripts or tools in a serverless environment, such as batch data processing, cron jobs, or custom scripts. Jobs are also handy for database migrations and other maintenance operations.
  • Array Job: Array jobs let developers run many tasks concurrently, each with its own parameters, to process large amounts of data or carry out repetitive operations, such as processing every object in a Cloud Storage bucket in a highly parallelized manner.
  • Scheduled Job: You can use Cloud Run to execute scheduled tasks at predetermined intervals, for example to regularly generate and send invoices, or to store an XML file containing the results of a database query and upload it every few hours. This helps automate recurring actions that must run on a schedule.
  • Machine-Learning Inference: Cloud Run jobs can run machine learning models in a serverless setting. Developers can package their models in containers and deploy them as jobs, enabling on-demand inference without needing to manage infrastructure.

How to deploy to Google Cloud Run

Deploying a single container to Google Cloud Run is not that daunting; just follow the steps below.

Step 1: Log in to Google Cloud Run using your registered account information. 

Step 2: Click on the Create Service option to view the full Create Service Form.

Using the form:

  • Choose Deploy one revision from an existing container image.
  • Click Test with a sample container.
  • Choose the region you wish your service to be served from using the Region dropdown.
  • In the Configure how this service is triggered section, select Allow all traffic and Allow unauthenticated invocations.
  • Click Create to deploy the sample container image to Cloud Run, then wait for the deployment to complete.

You have successfully deployed a container that responds to incoming HTTP requests on Cloud Run. When there are more requests than your container can handle, Cloud Run automatically scales it out; when demand drops, it scales back in.

Google Cloud Run integrations 

Google Cloud Run integrates with the broader Google Cloud infrastructure, allowing you to create fully functional apps. The following are some of the significant integrations of Cloud Run.

Data storage

Cloud Run connects with several Google Cloud storage services, including Cloud Storage, Cloud SQL, and Firestore, so your apps can store and access data.

Continuous delivery

Thanks to Cloud Run’s integration with Cloud Build and Container Registry, you can give your apps a complete end-to-end continuous delivery pipeline.

Background tasks

Cloud Run can use Cloud Tasks or Pub/Sub to run background tasks, enabling asynchronous processing or scheduling work for a later time.

Private networking

Cloud Run lets you deploy your services on a private network as an extra security measure.

Logging and error reporting

By integrating with Cloud Logging and Cloud Error Reporting, Cloud Run offers a central place to monitor and debug your application’s logs and errors.

Google Cloud APIs

Cloud Run integrates with several Google Cloud APIs, including Cloud Vision and Cloud Translation, letting you use pre-trained machine learning models in your apps.

Service identity

Cloud Run integrates with Cloud IAM, so you can use Google identities and groups to control access to your services and resources.

Google Cloud Run pricing 

Cloud Run bills you only for the resources you use, rounded up to the nearest 100 milliseconds. Note that each resource has a free tier, and your final Cloud Run bill is the sum of the resources listed in the pricing table.

When concurrency is set to handle more than one request at a time, multiple requests share a container instance’s allotted CPU and memory. Network egress within the same Google Cloud region is free (for example, traffic from one Cloud Run service to another Cloud Run service).
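The rounding rule can be sketched in a few lines of Python. The per-second rates below are illustrative assumptions for the sake of the example, not Google's actual prices, which vary by region and are listed in the pricing table:

```python
def billable_time_ms(actual_ms: int) -> int:
    """Round request-serving time up to the nearest 100 ms."""
    return -(-actual_ms // 100) * 100  # ceiling division

# Hypothetical per-second rates for illustration only:
CPU_RATE = 0.000024   # assumed $ per vCPU-second
MEM_RATE = 0.0000025  # assumed $ per GiB-second

def request_cost(actual_ms: int, vcpus: float = 1.0, gib: float = 0.5) -> float:
    """Approximate cost of serving one request under the assumed rates."""
    seconds = billable_time_ms(actual_ms) / 1000
    return seconds * (vcpus * CPU_RATE + gib * MEM_RATE)
```

So a request served in 101 ms is billed as 200 ms, while a request served in 100 ms is billed as exactly 100 ms.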

Conclusion

Google Cloud Run is a robust platform for running stateless containers serverlessly in a highly scalable and secure environment. Developers can quickly launch their apps with Cloud Run without having to maintain the underlying infrastructure.

The platform scales up or down automatically with traffic volume, resulting in cost savings and effective resource use. Its pay-per-use pricing structure also makes Cloud Run cost-effective for running containerized apps at scale.

Cloud Academy offers a comprehensive Google Cloud Platform Training Library, where you can learn about GCP certifications and labs. If you’re interested in learning about Google Cloud Run, you can get hands-on with the Build and Deploy a Container Application with Google Cloud Run lab. A basic understanding of Docker is helpful for this lab, but not compulsory.

Frequently Asked Questions (FAQs)

What is Google Cloud Run vs. App Engine?

Cloud Run is a fully managed serverless container platform, whereas Google App Engine is a higher-level platform-as-a-service (PaaS) that abstracts the underlying infrastructure and focuses on ease of use for web application development.

What is the difference between Google Cloud Run and Google Cloud Functions?

Google Cloud Functions lets you run code in response to events triggered by Google Cloud services. Google Cloud Run is built for stateless containers that can be deployed and run quickly in response to incoming requests.

What is the AWS equivalent of Google Cloud Run?

The AWS equivalent of Google Cloud Run is AWS Fargate, a fully managed container platform that lets you deploy and run containers without managing the underlying infrastructure.

The post Google Cloud Run: Automatically Scale Stateless Containers appeared first on Cloud Academy.

]]>
Overview of Google Cloud IAM: Roles, Best Practices, and Conditions https://cloudacademy.com/blog/overview-of-google-cloud-iam-roles-best-practices-and-conditions/ https://cloudacademy.com/blog/overview-of-google-cloud-iam-roles-best-practices-and-conditions/#respond Wed, 17 Aug 2022 01:00:00 +0000 https://cloudacademy.com/?p=50583 What is Google Cloud IAM? Google Cloud Identity and Access Management (IAM) allows users to access specific cloud resources while preventing unauthorized access to any resource in Google Cloud Platform (GCP). One of the main security tenets of cloud services in general, which states that no one should be granted...

The post Overview of Google Cloud IAM: Roles, Best Practices, and Conditions appeared first on Cloud Academy.

]]>
What is Google Cloud IAM?

Google Cloud Identity and Access Management (IAM) allows users to access specific cloud resources while preventing unauthorized access to any resource in Google Cloud Platform (GCP). GCP IAM embodies one of the main security tenets of cloud services: the principle of least privilege, which states that no one should be granted more permissions than they need. To preserve the integrity and security of business technology environments, which are expanding to include many Google Cloud resources, authorized access must be made available, but only to the right users and in a “least privilege” manner.

GCP Identity and Access Management has a stellar reputation for its security capabilities. Small, medium, and large enterprises adopting Google Cloud Platform should have a clear understanding of how to best leverage Google Cloud IAM, since understanding this service will help them ensure the security of their workloads and resources in the future.

With Google Cloud Identity and Access Management, you can limit who has access to particular resources by defining who (identity) has what access (role) to a specific resource in Google Cloud infrastructure. This means specifying the identity of everyone who will have access to, or authority over, a given cloud resource. Google offers a variety of cloud resources, including GKE clusters, Compute Engine VM (virtual machine) instances, and more. You can limit access not only to these resources, but also to the projects you are working on and the folders that contain vital information about your resources.

How Does Google Cloud IAM Work?

As described in the previous section, Google Cloud IAM defines who (identity) has what access (role) to which cloud resource. Examples of resources include VM instances running on Compute Engine, Google Kubernetes Engine (GKE) clusters, and Cloud Storage buckets. Resources also include the projects, folders, and organizations you use to arrange them.

In Google Cloud IAM, end users aren’t granted permissions on a resource directly. Instead, permissions are organized into GCP IAM roles, and roles are granted to authorized principals. Note: Google Cloud IAM frequently referred to principals as members in the past, and this name is still used by some APIs.

Google Cloud IAM Principals and Roles
Google Cloud IAM dashboard showing Principals and Roles

The roles assigned to principals are specified and enforced by an allow policy, also known as a Google IAM policy. Each allow policy is attached to a resource, and when an authenticated principal tries to access that resource, GCP IAM evaluates its allow policy to determine whether the action is allowed.

Google Cloud IAM Roles

In GCP IAM, roles hold the collections of permissions that need to be managed. Before assigning them, you need to be familiar with the various Google Cloud IAM role types, which are:

1. Basic Roles

The basic Google IAM roles are Owner, Editor, and Viewer. These roles predate GCP IAM. They are concentric: the Owner role includes the Editor role’s permissions, and the Editor role in turn includes the Viewer role’s permissions. Basic roles can be applied to a service or project through the Cloud Console, the gcloud tool, or the Google Cloud API.

2. Predefined Roles

Predefined roles provide granular access to the particular services that Google Cloud manages. They permit access to specific Google Cloud resources while restricting unauthorized access to all other resources. Google maintains predefined roles and updates their permissions as needed, typically whenever it introduces new products or services.

3. Custom Roles

Custom roles let you grant users granular access based on a permission list you define yourself. In addition to predefined roles, Google Cloud IAM gives you the freedom to create your own IAM custom roles, containing one or many specific permissions. You can create and maintain custom roles and then assign them to the users who are crucial to your company.

GCP IAM Service Accounts

A service account is a special type of account used by a VM instance or an application rather than a person. In Google Cloud infrastructure, workloads use service accounts to make authorized Google Cloud API calls, authorized either as the service account itself or as Google Workspace or Cloud Identity users through domain-wide delegation.

A service account is identified by an email address of the form service-account-name@project-id.iam.gserviceaccount.com. Each service account uses two sets of public/private RSA (Rivest, Shamir, Adleman) key pairs for Google authentication.
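As an illustrative sketch, a quick format check for user-managed service account emails might look like the following. The length limits reflect the documented 6–30 character rules for service account IDs and project IDs; treat the pattern as an approximation, not an authoritative validator:

```python
import re

# Matches name@project-id.iam.gserviceaccount.com, where the account ID and
# project ID each start with a lowercase letter and are 6-30 characters long.
SA_PATTERN = re.compile(
    r"^[a-z]([a-z0-9-]{4,28}[a-z0-9])?@"   # service account ID
    r"[a-z][a-z0-9-]{4,28}[a-z0-9]"        # project ID
    r"\.iam\.gserviceaccount\.com$"
)

def is_service_account_email(email: str) -> bool:
    """Heuristic check that a string looks like a service account email."""
    return SA_PATTERN.match(email) is not None
```

A check like this can be useful when auditing IAM policy bindings for entries that reference service accounts.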

Types of Service Accounts

Google Cloud IAM Service Accounts

There are three prominent service account categories:

User-managed: Service accounts created and controlled by GCP project administrators.

Default: Service accounts that Google creates and that a project’s administrator can manage. They are automatically associated with resources of certain types as the default service account. The Compute Engine default service account, for example, is by default granted the basic “Editor” role on its project, which illustrates how permissive these default service accounts can occasionally be and why it is crucial to understand how they operate.

Google-managed: Google-managed service accounts are created when relevant services need to access Google Cloud resources in a specific project, and they are granted specific roles by default. A project administrator can remove such an account at any time.

Google Cloud IAM Best Practices

The following are some recommendations for Google IAM best practices:

  • Identity and Access Management focuses on who, authorizing people and groups to act on particular resources based on permissions.
  • Use Google Cloud Platform IAM roles with the narrowest scope, preferring predefined roles over primitive roles, to give users granular access and manage what they can reach.
  • Uphold the principle of least privilege by ensuring that users have just the rights they truly require.
  • Create groups for users who share responsibilities, grant IAM roles to the groups rather than to individual users, and hold groups and service accounts accountable.
  • Use service accounts for server-to-server interactions, and rotate the keys of user-managed service accounts.
  • Know that under GCP IAM policy inheritance, a child resource cannot limit access granted at its parent.
  • To gain centralized and automated control over the organization’s cloud resources, define an organization policy to use with the Organization Policy Service.
  • Organization policies concentrate on what, letting you impose limitations on particular resources to decide how they can be configured and used.
  • A resource’s policies are automatically inherited by all of its descendants.
  • By attaching a Google Cloud IAM policy to the top-level organization node, you can establish a fundamental set of restrictions that applies to every element in the resource hierarchy.
  • Child nodes can be configured with custom organization policies that replace or combine with the inherited policy.
  • While Cloud Identity regulates how employees access Google services, the identity platform continues to be the source of truth.

Google Cloud IAM Conditions

GCP IAM Conditions let you specify and enforce conditional, attribute-based access control for Google Cloud resources.

You can decide to only grant access to principals using Google Cloud IAM Conditions if certain requirements are satisfied. For instance, you may give users temporary access so they can fix a production problem, or you could only grant access to staff members who request it from your corporate office.

Conditions are specified in the role bindings of a resource’s allow policy. When a condition is present, access is granted only if the condition’s expression evaluates to true. Each condition expression consists of logic statements that check one or more attributes.

Google Cloud IAM Conditions cannot be applied when granting basic roles such as the Editor role (roles/editor) or the Owner role (roles/owner). Additionally, you cannot apply Google IAM Conditions when granting roles to all users or to all authenticated users.

Allow Policies with Conditions

Within an allow policy, role bindings have the following structure:

"bindings": [
  {
    "role": ...,
    "members": ...,
    "condition": ...
  },
  ...
]

Each role binding can have zero or one condition; the condition object is optional. If a role binding has no condition object, the principals always have the designated role on the resource.

Role bindings with GCP IAM conditions do not take precedence over those without conditions. When a principal is granted a role through a binding with no condition, the principal always holds that role, even if the same principal is also included in a conditional binding for the same role.

The condition object’s structure is as follows:

"condition": {
    "title": ...,
    "description": ...,
    "expression": ...
}

A condition’s description is optional, but its title is required. The sole purpose of the title and description fields is to help identify and describe the condition.

The expression field defines an attribute-based logic expression using a subset of Common Expression Language (CEL). A condition expression may contain multiple statements, each evaluating a different attribute, combined using the logical operators specified by the CEL language standard.
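For example, a conditional role binding that grants temporary access until a cutoff date might look like this; the role, member, and timestamp are illustrative, and the expression uses the CEL request.time attribute:

```json
"bindings": [
  {
    "role": "roles/storage.objectViewer",
    "members": ["user:alex@example.com"],
    "condition": {
      "title": "expires-end-of-2022",
      "description": "Temporary access while debugging a production issue",
      "expression": "request.time < timestamp('2023-01-01T00:00:00Z')"
    }
  }
]
```

Once request.time passes the timestamp, the expression evaluates to false and the binding no longer grants the role.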

Wrapping Up

Google Cloud IAM lets you offer granular access to particular Google Cloud resources while restricting access to others. Using IAM, you can adhere to the security principle of least privilege, which states that no one should be granted more permissions than they require.

Delegating Google Cloud IAM roles across an enterprise gives cloud administrators and developers a foundation for building an effective and secure developer infrastructure. Segmenting responsibilities into basic, predefined, and custom categories and outlining each category’s restrictions is an effective way to set up a secure hierarchy across projects and organizations.

The post Overview of Google Cloud IAM: Roles, Best Practices, and Conditions appeared first on Cloud Academy.

]]>
It’s 10:00 AM: Do You Know Where Your Team’s Tech Skills Are? https://cloudacademy.com/blog/do-you-know-where-your-teams-tech-skills-are/ https://cloudacademy.com/blog/do-you-know-where-your-teams-tech-skills-are/#respond Mon, 17 May 2021 05:00:28 +0000 https://cloudacademy.com/?p=46318 One of the most challenging parts of managing a team in today’s digital world is that technology advances faster than people’s skills can — leaving you guessing if you have the resources to complete projects on time and on budget. It’s that sinking feeling in your gut of fear and...

The post It’s 10:00 AM: Do You Know Where Your Team’s Tech Skills Are? appeared first on Cloud Academy.

]]>
One of the most challenging parts of managing a team in today’s digital world is that technology advances faster than people’s skills can — leaving you guessing if you have the resources to complete projects on time and on budget.

It’s that sinking feeling in your gut of fear and worry about another missed deadline. But what if the project is mission critical? You must find a way to achieve your goals even if you don’t know if you have the talent on your team right now. And you need to use your budget wisely to generate results, fast.

So, you consider your options. Hiring cycles for specialty tech talent are both slow and expensive, and that’s before onboarding even begins. You can start looking for the right people with the right mix of interpersonal and technical expertise, but that timeline is really out of your control — and the clock is ticking.

Taking a step back… could your team execute if they were given the proper tools to upskill quickly?

Putting a stop to the guesswork

The first step in forecasting your skill requirements is to fully understand where your team sits versus where it needs to be to execute against current and future business objectives. This knowledge will give you the confidence to make the right decisions with regard to investing internally or looking for talent on the open market. But where to begin?

Cloud Academy provides the answer to the question, “What are my team’s current tech skills?” with our accurate and objective tech skill assessment tool. Use our out-of-the-box assessments or create a custom version to test and validate on topics including:

Sounds good. What next?

In addition to assessing your team, Cloud Academy for Business helps you recommend and assign the training plans you need to take individuals’ skills to the next level. We are nothing like other e-learning platforms that outsource their content, don’t curate it properly, and leave skills development to the whim of employees’ motivation.

Cloud Academy is a software company built on the premise that an investment in the skills growth of your workforce should have a direct, positive, and measurable effect on operational goals. And that without accountability via regular evaluation, training programs are bound to fail.

With our platform, managers and administrators can set expectations around timelines that promote programmatic upskilling, establish defined job roles with career progression tracks, and drive better execution on business objectives — all in a predictable and scalable way — whether you have two employees or tens of thousands. This method of shared accountability promotes team growth, improved retention, and a culture of success.

Our in-house content team consists of tech experts around the globe who continually create new and refresh existing educational assets. This means the learning paths, hands-on labs in live environments, quizzes, exams, and certifications that you assign your team are always up to date. Not to mention, our dedicated customer success team works with you to help you identify the right priorities and the best path forward for getting your squad to where it needs to be.

Try Cloud Academy’s Enterprise Plan for 2 weeks free

For registrations from May 17 until May 31, 2021, we’re offering your business unlimited access to the Cloud Academy platform for 14 days at no cost. Start by assessing your team’s skills, and explore the upskilling potential enabled by our library of content. We’ll be there to help you every step of the way.

Don’t miss out! Click here to get started.


The post It’s 10:00 AM: Do You Know Where Your Team’s Tech Skills Are? appeared first on Cloud Academy.

]]>
How Do We Handle Google Subscription Notifications at Cloud Academy? https://cloudacademy.com/blog/how-do-we-handle-google-subscription-notifications-at-cloud-academy/ https://cloudacademy.com/blog/how-do-we-handle-google-subscription-notifications-at-cloud-academy/#respond Thu, 12 Nov 2020 01:18:56 +0000 https://cloudacademy.com/?p=44606 If you have ever used the Cloud Academy Android application, you should be aware that you have the option to subscribe to our platform by paying through Google Pay. We offered this option because most mobile users nowadays prefer paying with integrated payment systems offered by both Android and iOS....

The post How Do We Handle Google Subscription Notifications at Cloud Academy? appeared first on Cloud Academy.

]]>
If you have ever used the Cloud Academy Android application, you should be aware that you have the option to subscribe to our platform by paying through Google Pay. We offered this option because most mobile users nowadays prefer paying with integrated payment systems offered by both Android and iOS.

The Challenge

Because of this integration in the Cloud Academy mobile application, we needed to design an implementation to support Google subscriptions.

We consulted the Google documentation and learned that for every subscription action, such as a subscription creation or a renewal, we would receive a notification generated by the Google Pay service. Google sends all notifications to a Google Cloud Pub/Sub topic that we needed to specify.

So the challenge was: How can we handle loads of subscription notifications in an efficient, reliable, and fault-tolerant way?

Google Cloud Pub/Sub

Luckily for us, all the Google Pay notifications are sent to a Google Cloud Pub/Sub topic that we can access to retrieve them.

Before going through the solution that we implemented, I want to give you a high-level overview of what Pub/Sub is. Google Cloud Pub/Sub is a messaging-oriented middleware that lets you decouple services that create messages (the Publishers) from services that process messages (the Subscribers).

The core components in a Pub/Sub solution are the following:

  • Topics: An entry point for the publishers that want to publish messages.
  • Subscriptions: An entity associated with a specific topic and that is used by the subscribers to receive messages published on the topic the subscription is associated with.

There are two kinds of subscriptions.

Pull Subscriptions

A pull subscription is used when the subscriber wants to manually get all the messages that have been published on the topic the subscription is associated with. In this case, the subscriber makes a pull call, receives the messages, and then sends an ACK (acknowledge) to let Pub/Sub know that everything has completed correctly.

Subscription Pull Diagram

Push Subscriptions

A push subscription is used when the subscriber wants to be triggered automatically whenever a new message is published on the topic the subscription is associated with. In this case, Pub/Sub sends each new message directly to the subscriber, which then needs to send back an ACK (acknowledgment).

Subscription Push Diagram

The Solution

To build a solution that meets the requirements described above, we set up a single Pub/Sub topic and then created and attached a push subscription to it. We configured Google Pay to send notifications to our Pub/Sub topic, which completes the publisher part of our infrastructure.

That was a good start, but we still needed to set up the subscriber part of the infrastructure. For this purpose, we created a webhook in our software infrastructure and set it as the target of the push subscription. This way, every message published to the topic is captured and then delivered to our internal webhook.
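As a simplified sketch of the subscriber side, the webhook can decode the standard Pub/Sub push envelope, whose message payload arrives base64-encoded, before handing the notification to the business logic. The function name and the returned dictionary shape are illustrative, not our actual implementation:

```python
import base64
import json

def parse_push_envelope(body: bytes) -> dict:
    """Decode a Pub/Sub push delivery into its attributes and payload.

    A push request body looks like:
      {"message": {"data": "<base64>", "attributes": {...}, "messageId": "..."},
       "subscription": "projects/.../subscriptions/..."}
    """
    envelope = json.loads(body)
    message = envelope["message"]
    data = base64.b64decode(message.get("data", "")).decode("utf-8")
    return {
        "subscription": envelope.get("subscription"),
        "message_id": message.get("messageId"),
        "attributes": message.get("attributes", {}),
        "payload": json.loads(data) if data else None,
    }
```

Returning a 2xx status from the webhook acts as the ACK; returning a 5xx status causes Pub/Sub to redeliver the message.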

Security and Reliability Issues

Because our internal software needs to authenticate any request made by external entities, we needed a way to authenticate all requests made to the webhook. With this approach, we could be sure which requests were allowed (those made by Google Pub/Sub) and which were not (such as malicious requests). Luckily, we were able to set up authentication on the Pub/Sub subscription: each message sent to the webhook is signed with a JWT inserted in the request header, which we verify with an authentication middleware in our software.

Another very important point was the reliability of our solution. Suppose that, due to an unexpected error, the webhook cannot process a message that Pub/Sub delivers and responds with an error (a 5xx status code). We couldn’t afford to lose messages for any reason. Google Pub/Sub helped us again: we were able to configure the push subscription to not delete messages until an ACK is sent, and to retain them for a 72-hour period. During this period, Pub/Sub keeps retrying delivery until the message is processed successfully.

The post How Do We Handle Google Subscription Notifications at Cloud Academy? appeared first on Cloud Academy.

]]>
New Content: AWS Data Analytics – Specialty Certification, Azure AI-900 Certification, Plus New Learning Paths, Courses, Labs, and More https://cloudacademy.com/blog/new-content-aws-data-analytics-azure-ai900-certifications/ https://cloudacademy.com/blog/new-content-aws-data-analytics-azure-ai900-certifications/#respond Wed, 14 Oct 2020 02:43:22 +0000 https://cloudacademy.com/?p=44425 This month our Content Team released two big certification Learning Paths: the AWS Certified Data Analytics – Speciality, and the Azure AI Fundamentals AI-900. In total, we released four new Learning Paths, 16 courses, 24 assessments, and 11 labs.  New content on Cloud Academy At any time, you can find...

The post New Content: AWS Data Analytics – Specialty Certification, Azure AI-900 Certification, Plus New Learning Paths, Courses, Labs, and More appeared first on Cloud Academy.

]]>
This month our Content Team released two big certification Learning Paths: the AWS Certified Data Analytics – Specialty, and the Azure AI Fundamentals AI-900. In total, we released four new Learning Paths, 16 courses, 24 assessments, and 11 labs. 

New content on Cloud Academy

At any time, you can find all of our new releases by going to our Training Library and finding the section titled “New this month in our library.” You can also keep track of what new training is coming for the next 4-6 weeks with our Content Roadmap.


AWS

Learning Path: AWS Data Analytics Specialty (DAS-C01) Certification Preparation (Preview)

This certification Learning Path is specifically designed to prepare you for the AWS Certified Data Analytics – Specialty (DAS-C01) exam. It covers all the elements required across all five of the domains outlined in the exam guide.

Course: AWS Databases used with Data Analytics

This course introduces a number of different AWS database services that are commonly used with data analytics solutions and that will likely be referenced within the AWS Data Analytics Specialty certification. As such, this course explores the following database services: Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, Amazon Redshift.

Course: Overview of Differences Between AWS Database Types

This course provides a high-level overview of the managed database offerings available from AWS. It covers relational and non-relational databases, how they work, their strengths, and what workloads are best suited for them.

Course: Backup and Restore Capabilities of Amazon RDS & Amazon DynamoDB

This course explores the different strategies that are available for when you need to both back up and restore your AWS databases across Amazon Relational Database Service (RDS) and Amazon DynamoDB. During this course, you will learn about the different backup features that are available in Amazon RDS and DynamoDB, how to identify the differences between them, and when you should use one over the other. 

Course: Data Visualization – How to Convey your Data

This course explores how to interpret your data so that you can decide which chart type will most effectively visualize and convey your data analytics. Using the correct visualization techniques allows you to gain the most from your data. In this course, we will look at the importance of data visualization, and then move on to the relationships, comparisons, distribution, and composition of data.

Course: Security best practices when working with AWS Databases

This course explores security best practices when working with AWS databases, specifically looking at RDS and DynamoDB, with some extra content related to Aurora. This course is recommended for anyone who is looking to broaden and reinforce their AWS security understanding, or anyone who is interested in creating secure databases in general.


Azure

Learning Path: AI-900 Exam Preparation: Microsoft Azure AI Fundamentals (preview)

This learning path is designed to help you prepare for the AI-900 Microsoft Azure AI Fundamentals exam. Even if you don’t plan to take the exam, these courses and hands-on labs will help you gain a solid understanding of Azure’s AI services.

Course: Implementing High Availability Disaster Recovery for Azure SQL Databases

This course examines the features that Azure provides to help you make sure your SQL databases, whether they are managed in the cloud or on-premises, are not the point of failure in your systems.

Course: Introduction to Azure Storage

This course is intended for those who wish to learn about the basics of Microsoft Azure storage, covering the core storage services in Azure and the different storage account types that are available. You’ll watch a demonstration that shows you how to create a storage account in Microsoft Azure, then move on to more detail.

Course: Designing for Azure Identity Management (update)

This Designing for Azure Identity Management course will guide you through the theory and practice of recognizing, implementing, and deploying the services on offer within your enterprise. Learn how to better the protection of your organization by designing advanced identity management solutions. 

Course: Managing Code Quality and Security Policies with Azure DevOps

This course explores how to manage code quality and security policies with Azure DevOps, and will help those preparing for Microsoft’s AZ-400 exam. 

Lab Playground: Azure Notebooks Machine Learning Playground

Azure Notebooks enables data scientists and machine learning engineers to build and deploy models using Jupyter notebooks from within the Azure ML Workspace. A full Jupyter notebook environment is hosted in the cloud and provides access to an entire Anaconda environment. The tedious task of setting up and installing all the tools for a data science environment is automated. Data scientists, teachers, and students can dive right into learning without spending time installing software.

Lab Challenge: Azure Machine Learning Challenge

In this lab challenge, you will take on the role of a data scientist and complete several tasks within the Azure Machine Learning service. You will train a machine learning model using an Azure Notebook or the Azure Machine Learning GUI. After the model has been trained and is ready for production, you will deploy it as a web service using Azure Container Instances.


Data Science/Artificial Intelligence

Learning Path: Wrestling with Data

In this learning path, we dive into the various tools and techniques available for manipulating information and data sources. We then show you how you can use this knowledge to actually solve some real-world problems.

Learning Path: Introduction to Data Visualization

This learning path explores data sources and formatting, and how to present data in a way that provides meaningful information. You’ll look at data access patterns, and how different interfaces allow you to access the underlying information. This learning path also provides a practical, real-world example of how all this theory plays out in a business scenario. 

Course: Data Wrangling with Pandas

In this course, we are going to explore techniques for advanced data exploration and analysis using Python. We will focus on the Pandas library, using real-life scenarios to help you better understand how data can be processed with Pandas.
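
As a taste of the kind of wrangling the course covers, here is a minimal Pandas sketch; the dataset and column names are made up purely for illustration, and it assumes pandas is installed:

```python
import pandas as pd

# Hypothetical sales records with messy values, standing in for real data.
df = pd.DataFrame({
    "region": ["north", "North", "south", "south", None],
    "revenue": [100, 150, 80, None, 60],
})

# Normalize inconsistent casing and drop rows missing a region.
df["region"] = df["region"].str.lower()
df = df.dropna(subset=["region"])

# Fill missing revenue with 0 and aggregate per region.
df["revenue"] = df["revenue"].fillna(0)
totals = df.groupby("region")["revenue"].sum()
print(totals.to_dict())  # {'north': 250.0, 'south': 80.0}
```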

Course: Wrestling With Data

In this course, we’re going to do a deep dive into the various tools and techniques available for manipulating information and data sources along with showing you at the end of it how you can actually solve some real-world problems. If you are trying to handle increasingly complex data sets and round out your experience as a professional data engineer, this is a great course to get a practical field-based understanding.


DevOps

Hands-on Lab: Installing and Running Applications with Docker Enterprise Universal Control Plane

In this lab, you will learn how to install UCP onto bare Docker hosts to create a multi-node installation from the ground up. You will also learn how to deploy applications onto the UCP cluster you create using the web interface.

Lab Challenge: Docker Swarm Playground

This playground provides a Docker swarm cluster comprised of one manager node and two worker nodes. You have full access to the swarm nodes, each of which has docker, docker-compose, and relevant command-line completions already installed and running. You have full access to the underlying host as well, so you are not restricted compared to an environment that runs Docker in Docker (dind).

Hands-on Lab: Deploying Infrastructure with Terraform

Terraform is an infrastructure automation tool that allows companies to manage infrastructure through code. This provides many benefits, such as faster recovery, predictability, and speed. In this lab, you will create a Terraform configuration to deploy a VPC in AWS.

Course: Tech Talk: Building Automated Machine Images with HashiCorp Packer

In this Tech Talk, you’ll watch a presentation on HashiCorp’s Packer and how it can be used to build automated machine images and then deploy the new image into a production environment with a CI/CD pipeline. You’ll follow virtually as a group of IT professionals discuss the tool and its uses. 


Google Cloud Platform

Course: Managing GCP Operations Monitoring

This course shows you how to monitor your operations on GCP. It starts with monitoring dashboards. You’ll learn what they are and how to create, list, view, and filter them. You’ll also see how to create a custom dashboard right in the GCP console.

Course: Managing and Investigating Service Incidents on GCP

Managing and investigating service incidents is an important part of the maintenance process. It is a necessary task that can be laborious, but with the right organization, an understanding of the systems, knowledge of the processes, and the discipline to adhere to best practices, it can be optimized. This course focuses on the predominant parts of managing service incidents and shows how Google Cloud Platform can aid in the endeavor.

Hands-on Lab: Scaling an Application Through a Google Cloud Managed Instance Group

In this lab, you will create an instance template, an instance group with the autoscaling enabled, and you will then attach an HTTP load balancer to the instance group to load balance the traffic to the VM group. You will also perform a stress test to check that the autoscaling is working properly.

Lab Challenge: Google Cloud Scaling Applications Challenge

In this lab challenge, you will need to prove your knowledge of highly available and scalable applications by creating infrastructure on Google Compute Engine. The objectives you will need to achieve represent essential skills that a Google Certified Associate Cloud Engineer and Google Certified Professional Cloud Architect need to have. 

Lab Challenge: Google Cloud SQL Challenge

In this lab challenge, you will need to prove your knowledge of Google Cloud SQL by creating a production ready Cloud SQL instance. The objectives you will need to achieve represent essential skills that a Google Certified Associate Cloud Engineer and Google Certified Data Engineer need to have.


Programming

Course: Building a Python Application: Course One

One of the best ways to learn new programming languages and concepts is to build something. Learning the syntax is always just the first step. After learning the syntax the question that arises tends to be: what should I build? Finding a project to build can be challenging if you don’t already have some problems in mind to solve. This course is broken up into sprints to give you a real-world development experience, and guide you through each step of building an application with Python.

Hands-on Lab: Introduction to Graph Database With Neo4j

In this lab, you will understand the core principles of a graph database (especially a property graph) and you will install the Neo4j DBMS on an EC2 instance. This lab is intended for data engineers who want to switch to the graph data model or developers who need to build an application based on a graph database.

Hands-on Lab: Constructing Regular Expression Character Classes

Regular Expressions are a tool for searching and manipulating text. In this lab, you will use the Python programming language to learn the basics of how to use a regular expression and you’ll learn about the different character classes available for matching different types of characters.
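
The character classes the lab covers can be previewed with a few lines of Python's built-in `re` module (examples are illustrative, not lab material):

```python
import re

# A bracketed character class matches any one character from the set.
assert re.search(r"[0-9]+", "Order 42 shipped").group() == "42"

# \d, \w, \s are shorthand classes for digits, word chars, and whitespace.
assert re.findall(r"\d+", "3 apples, 12 pears") == ["3", "12"]

# A leading ^ inside brackets negates the class: here, anything but vowels.
assert re.findall(r"[^aeiou]", "regex") == ["r", "g", "x"]
```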

Hands-on Lab: Working with Regular Expressions: Special Characters and Anchors

In this lab, you will learn how to use quantifiers to match sequences of characters in different ways, you will learn about anchors, and you will learn how to use capture groups. 
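
A quick Python sketch of the quantifiers, anchors, and capture groups the lab exercises (illustrative only, not taken from the lab):

```python
import re

# Quantifiers control repetition: ? means 0 or 1, * means 0+, + means 1+.
assert re.fullmatch(r"colou?r", "color") is not None

# Anchors pin the match: ^ to the start of the string, $ to the end.
assert re.match(r"^ERROR", "ERROR: disk full") is not None

# Capture groups extract the matched subparts.
m = re.match(r"(\d{4})-(\d{2})-(\d{2})", "2020-09-14")
assert m.groups() == ("2020", "09", "14")
```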

Hands-on Lab: Using Regular Expressions Effectively in the Real World

In this lab, you will apply quantifiers, anchors, and capture groups to practical, real-world text-processing tasks, building on the skills from the previous regular expression labs.


Webinars

Office Hours: AWS Solutions Architect – Associate | Domain 1 of 4: Design Resilient Architectures

Take a deep dive on Domain 1: Design Resilient Architectures of the AWS Solutions Architect – Associate exam.

Office Hours: Decoupling Architectures Like There’s No Tomorrow

Learn all about decoupling architectures and how to set them up and manage them successfully, with our experts.


Platform

Learn, Grow, Succeed: Introducing Training Plans for Individuals

We’re excited to announce that we’ve released a new game-changing feature for individual users that has already proven to help our largest enterprise customers increase newly acquired skills by 5x: Training Plans.


Stay updated

As always, we use Cloud Academy Blog to keep you up-to-date on the latest technology and best practices. All of our new blogs, reports, and updates go directly to our Cloud Academy social media accounts. For the latest updates, follow and like us on the following social media platforms:

The post New Content: AWS Data Analytics – Specialty Certification, Azure AI-900 Certification, Plus New Learning Paths, Courses, Labs, and More appeared first on Cloud Academy.

]]>
New Content: Azure DP-100 Certification, Alibaba Cloud Certified Associate Prep, 13 Security Labs, and Much More https://cloudacademy.com/blog/new-content-azure-dp-100-certification-alibaba-cloud-certified-associate-prep-13-security-labs-and-much-more/ https://cloudacademy.com/blog/new-content-azure-dp-100-certification-alibaba-cloud-certified-associate-prep-13-security-labs-and-much-more/#respond Tue, 15 Sep 2020 03:18:20 +0000 https://cloudacademy.com/?p=44033 This past month our Content Team served up a heaping spoonful of new and updated content. Not only did our experts release the brand new Azure DP-100 Certification Learning Path, but they also created 18 new hands-on labs — and so much more! New content on Cloud Academy At any...

The post New Content: Azure DP-100 Certification, Alibaba Cloud Certified Associate Prep, 13 Security Labs, and Much More appeared first on Cloud Academy.

]]>
This past month our Content Team served up a heaping spoonful of new and updated content. Not only did our experts release the brand new Azure DP-100 Certification Learning Path, but they also created 18 new hands-on labs — and so much more!

New content on Cloud Academy

At any time, you can find all of our new releases by going to our Training Library and finding the section titled “New this month in our library.” You can also keep track of what new training is coming for the next 4-6 weeks with our Content Roadmap.


Alibaba

Learning Path: Alibaba Cloud Certified Associate (ACA) Preparation

Just starting out in the world of Alibaba Cloud? Let this learning path be the first step on your journey. It covers the essential aspects of this fast-growing platform, introducing you to the fundamental services you’ll need in order to build your own Alibaba Cloud infrastructure.

Course: Alibaba Server Load Balancer (SLB)

This course provides an introduction to Alibaba’s Server Load Balancer service, also known as SLB. The course begins with a brief intro to load balancing in general and then takes a look at Alibaba SLB and its three main components.


AWS

Course: Understanding Amazon RDS Performance Insights

This course explores Amazon RDS Performance Insights, a performance monitoring and tuning feature that can quickly assess the load on a database hosted inside Amazon RDS and determine when and where to take action.

Course: Understanding RDS Scaling and Elasticity

This course explores how to scale your RDS databases.  It covers scaling based on reads or writes, and what it means to scale horizontally or vertically. Additionally, it covers sharding databases as a way to increase write performance and when it needs to be considered as an option.

Course: Using Automation to Deploy AWS Databases

This course explores how to use automation when creating Amazon RDS databases. It includes using AWS Secrets Manager for increasing the security of provisioned resources by limiting human intervention.

Hands-on Lab: Efficiently Storing Data in S3 for Data Analytics Solutions

Amazon S3 is a fully managed service for storing data in the cloud. S3 frees you from managing servers, NAS and SAN devices, and from worrying about individual physical disks. In this lab, you will create data, store it in S3, and transform the data to be more performant and cost-efficient.

Lab Challenge: Data Analytics Processing Challenge

Put your skills to the test in this data analytics lab. Complete a data analytics processing solution before time runs out. You will need to be familiar with Amazon’s Data Analytics services and associated tools in order to complete a partially built solution for processing log data from an EC2 instance using Amazon Kinesis, AWS Lambda, and Amazon S3.


Azure

Learning Path: DP-100 Exam Prep: Designing and Implementing a Data Science Solution on Azure (preview)

This learning path is designed to help you prepare for Microsoft’s DP-100 Designing and Implementing a Data Science Solution on Azure exam. Even if you don’t plan to take the exam, these courses and hands-on labs will help you learn how to use Azure’s machine learning solutions.

Course: Introduction to Azure Machine Learning

Machine learning is a notoriously complex subject that usually requires a great deal of advanced math and software development skills. In this course, you will learn the basic concepts of machine learning and then follow hands-on examples of choosing an algorithm, running data through a model, and deploying a trained model as a predictive web service.

Course: Using the Azure Machine Learning SDK

Learn how to operate machine learning solutions at cloud scale using the Azure Machine Learning SDK. This course teaches you to leverage your existing knowledge of Python and machine learning to manage data ingestion, data preparation, model training, and model deployment in Microsoft Azure.

Course: Analyzing Resource Utilization on Azure (update)

This course looks into how to capture log data and metrics from Azure services and feed this information into different locations for processing. We take a look at diagnostic logging, which can help to troubleshoot services and create queries and alerts based on that data. We also look into Azure Advisor, cost consumption reporting, and how we can baseline resources.

Hands-on Lab: Tuning Hyperparameters with Hyperdrive in Azure Machine Learning

In the world of data science, model parameters are the values a model learns from training on a dataset. In contrast, a hyperparameter is a parameter used to control the training process itself. In this lab, you will dive into Azure Notebooks and launch a Jupyter notebook to create a Hyperdrive experiment and perform hyperparameter tuning against a regression training model.

Hands-on Lab: Building Azure ML Pipelines with Azure Machine Learning SDK

With the Azure Machine Learning SDK comes Azure ML pipelines. Machine learning engineers can create a CI/CD approach to their data science tasks by splitting their workflows into pipeline steps. In this lab, you will dive into Azure Notebooks and launch a Jupyter notebook to build an Azure ML pipeline that ingests data, trains a model, and deploys a web service.


Data Science/Artificial Intelligence

Learning Path: The Beginner’s Guide to Machine Learning and Artificial Intelligence

This Learning Path provides an introduction to machine learning for beginners. It is designed as a gentle introduction, which means we'll start from the ground up and focus on giving students the tools and materials they need to navigate the space.

Learning Path: The Basics of Data Management, Data Manipulation and Data Modelling

Take this learning path to get the basics of everything data-related: data sources, data formats, databases, and SQL. It focuses on understanding common data formats and interfaces, exploring the formats you'll encounter as a data engineer so that you gain a deep understanding of the pros and cons of storing your data in different ways.

Learning Path: Moving From Spreadsheets

This learning path gives you a fully hands-on introduction to machine learning with a focus on databases. It consists entirely of our interactive hands-on labs, which means that you will get practical experience from the get-go.

DevOps

Hands-on Lab: Spinnaker Pipelines – Deploying Resources into Kubernetes

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. In this Lab scenario, you’ll first install and configure Spinnaker into its own dedicated Kubernetes cluster. As you advance, you’ll get into Spinnaker Pipelines, using Spinnaker with Kubernetes clusters, and extending the build and deployment even further.

Lab Challenge: Ansible Configuration Management Troubleshooting Challenge

In this lab challenge your Ansible configuration management skills are put to the test. You need to complete several tasks using Ansible to configure a host to serve a web application over HTTP and a web page over HTTPS. Use your knowledge of Ansible concepts and the command-line interface (CLI) to troubleshoot and resolve the issues in the provided configuration to pass all of the checks before time runs out.

Lab Challenge: Terraform Deploy AWS Resources Challenge

In this lab challenge, you will put your infrastructure development skills to the test. You will be tasked with developing an infrastructure solution using Terraform by modifying an existing Terraform configuration to include deploying a subnet and EC2 resource.

Course: Introduction to Kubernetes – refresh

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

Hands-on Lab: Webserver Creation using Pulumi to Manage Infrastructure

Pulumi is an Infrastructure as Code tool that supports multiple cloud providers. The key feature of Pulumi is that it allows you to describe your infrastructure using any of the popular programming languages it supports. In this lab, you will learn how to install and configure the Pulumi command-line tool, and you will learn how to use it to create a web server.


Security

Hands-on Lab: Cryptanalysis of Substitution Ciphers

Monoalphabetic ciphers are simple substitution ciphers in which a single cipher alphabet is used: each character in the plaintext is always replaced, one-for-one, by the same character in the ciphertext. This makes the ciphertext susceptible to frequency analysis. In this lab, you will explore this weakness by decrypting a classical cipher, the Caesar cipher.
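
The cipher's weakness is easy to see in code. Here is a minimal Caesar shift in Python (a sketch for illustration, not part of the lab materials); with only 26 possible keys, brute force or frequency analysis breaks it trivially:

```python
def caesar(text, shift):
    """Shift each letter by a fixed amount, leaving other characters alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)               # dwwdfn dw gdzq
print(caesar(ciphertext, -3))   # attack at dawn
```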

Hands-on Lab: Asymmetric Encryption RSA Demonstration

RSA is a modern asymmetric encryption standard used in public key cryptographic schemes to share secret values and electronically sign documents. The algorithm uses modular arithmetic to share numbers between parties: a value encrypted with a public key can only be decrypted by the holder of the separate, secret, private key. Each key pair is generated using several mathematical functions and equations. You will explore the RSA encryption algorithm in this lab by encrypting and decrypting a message.
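
The modular math can be sketched with textbook-sized primes. This toy example is for illustration only; real RSA keys use primes hundreds of digits long, plus padding schemes omitted here:

```python
# Toy RSA key generation with tiny primes.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message
```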

Hands-on Lab: Dictionary Attacking a Web Application with Hydra and Burp Suite

In this lab, you will be attacking a Linux machine named Metasploitable, running the Damn Vulnerable Web App (DVWA). The DVWA is an open source web app written to be vulnerable to a host of different security exploits, designed for security professionals to practice their skills and conduct research.

Hands-on Lab: Cracking Hashes with John the Ripper

This lab is part of a series on cyber network security. You will be looking at different hashing algorithms, each of which were commonly used to store passwords on Windows and UNIX systems. You will be using John the Ripper to crack some password files.
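
Conceptually, wordlist cracking reduces to hashing each candidate and comparing. A minimal Python sketch of the idea (the "leaked" hash and wordlist here are fabricated for illustration; John the Ripper adds optimized formats, rules, and incremental modes):

```python
import hashlib

def crack(target_hash, wordlist):
    """Return the first candidate whose MD5 digest matches the target."""
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# Hypothetical leaked MD5 hash of the password "sunshine".
leaked = hashlib.md5(b"sunshine").hexdigest()
print(crack(leaked, ["password", "letmein", "sunshine"]))  # sunshine
```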

Hands-on Lab: Configuring Active Directory Group Policies

You will be using the AD environment to create a new Group Policy Object (GPO) and apply it to a user account in order to enforce some security settings. Group Policy Objects are technical definitions of a user’s permissions and rights within a domain. GPOs are a powerful tool that enable a domain administration team to apply the principle of least privilege across thousands of users quickly and efficiently.

Hands-on Lab: Configuring IPsec on a Windows 2016 Server

IPsec is a framework of open standards for ensuring private, secure communications over IP networks through the use of cryptographic security services. The Microsoft Windows implementation of IPsec is based on standards developed by the Internet Engineering Task Force (IETF) IPsec working group. You will configure IPsec on a Windows Server 2016 machine in this lab. 

Hands-on Lab: SSL Handshake Analysis using Wireshark

Secure Sockets Layer (SSL) is a protocol that allows HTTPS web applications to exchange information securely. Wireshark is a network protocol analyzer that security professionals can use to filter and search captured traffic in order to understand what has been logged with tcpdump or a similar tool. You will analyze a network traffic capture of an SSL handshake and then use a private key to decrypt and extract a file from the capture.
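
While the lab works at the packet level, Python's standard `ssl` module gives a feel for the client-side parameters the handshake negotiates (a sketch for orientation, not part of the lab):

```python
import ssl

# A client-side TLS context; the handshake you capture in Wireshark
# negotiates these same parameters (protocol version, ciphers, certs).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS

# The default context uses secure settings: certificates are verified
# and hostnames are checked against the certificate.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```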

Hands-on Lab: Exploiting the Heartbleed Bug using MetaSploit

The Heartbleed bug is a serious vulnerability that was discovered to exist on web servers using the OpenSSL cryptographic library, a popular implementation of the TLS protocol for web servers. This exploit will work on any unpatched web servers running an OpenSSL instance in either client or server mode. You will exploit the Heartbleed bug in this lab.

Hands-on Lab: Configuring Access Control Lists on Cisco Routers

Standard ACLs allow a network security technician to control the traffic moving through their network and limit its propagation based on the kind of traffic and the source address. Cisco devices are configured and managed through a proprietary Command Line Interface (CLI) running the Internetworking Operating System (IOS). You will use a Cisco virtual network simulator called Cisco Packet Tracer to learn networking and protocol functions.

Hands-on Lab: Using Snort to Detect a Brute Force Hydra Attack

pfSense is a FreeBSD-based router/firewall that can be extended with various plugin modules, including an IDS/IPS module called Snort that defends a network from malicious behavior. You will conduct a dictionary attack on the Metasploitable DVWA using Hydra and Burp Suite in Kali Linux and attempt to detect it on the router using Snort and the community ruleset.

Hands-on Lab: Using the Man-In-The-Middle Framework (MITMf) to Bypass HTTPS Strict Transport Security (HSTS)

This exercise will familiarize you with the Man-In-The-Middle framework (MITMf) and how an attacker might use this toolset to attack clients on your network. The particular attack we will be performing in this exercise will be bypassing the HSTS policy. You will use MITMf to bypass HTTPS Strict Transport Security (HSTS) in this lab.

Hands-on Lab: Using the Low Orbit Ion Cannon (LOIC) to Perform Denial of Service (DoS) Attacks

Denial of service can take many forms, but the basic purpose of an attack is to disrupt the availability of a service and to prevent normal operations from occurring. The most common method of attacking availability is network-based. The LOIC tool you will be using in this lab was made famous during Anonymous' attacks on the Church of Scientology during Project Chanology.

Hands-on Lab: Performing FilePwn Using the Man-In-The-Middle Framework (MITMf)

The MITMf is a collection of tools, written into an easy-to-use framework by byt3bl33d3r, which an attacker can use to simplify the construction and execution of Man-In-The-Middle (MITM) attacks. The MITM attack you will be carrying out will be utilizing a feature called “FilePwn”, where the attacker can inject a malicious payload into or fully replace a file that the victim is downloading from an HTTP website.


Webinars

Stop Messing Around and Start Passing Your AWS Exams

Preparing for a cert can be stressful, and we know it. Wouldn't it be wonderful if you could gather key insights on how to tackle and pass an AWS exam from someone who has already gone through a bunch of them? Look no further! In this on-demand webinar, our AWS Certification Specialist, Stephen Cole, will guide you through some general strategies that you can apply when taking your first AWS exam, from home or on-site.

Office Hours: Nail the Google Associate Cloud Engineer Exam

This is your chance to take your career a step further. How? By preparing for the Google Associate Cloud Engineer Exam and passing it, of course! Join Guy Hummel, Google and Azure Content Lead, and Tom Mitchell, Google and Azure Trainer, in this on-demand webinar to learn all of the secrets behind this exam. Our Google experts will break down the five subject areas assessed in the exam, and using samples, explain how to analyze questions to determine the correct answer.


Platform

Introducing SCORM to Create Custom Content With Ease

Your infrastructure is custom-built to fit your organization’s unique needs. Shouldn’t your tech training be, too? Capturing all the nuances of your production environment within your cloud training helps teams perform with accuracy and precision in the real world. With SCORM, you can quickly create personalized training that’s also highly interactive, easy to manage, and easy to scale across teams.


Stay updated

As always, we use Cloud Academy Blog to keep you up-to-date on the latest technology and best practices. All of our new blogs, reports, and updates go directly to our Cloud Academy social media accounts. For the latest updates, follow and like us on the following social media platforms:

The post New Content: Azure DP-100 Certification, Alibaba Cloud Certified Associate Prep, 13 Security Labs, and Much More appeared first on Cloud Academy.

]]>
New Content: Alibaba, Azure AZ-303 and AZ-304, Site Reliability Engineering (SRE) Foundation, Python 3 Programming, 16 Hands-on Labs, and Much More https://cloudacademy.com/blog/alibaba-azure-az-303-az-304-site-reliability-engineering-sre-python-3-programming/ https://cloudacademy.com/blog/alibaba-azure-az-303-az-304-site-reliability-engineering-sre-python-3-programming/#respond Wed, 05 Aug 2020 18:27:58 +0000 https://cloudacademy.com/?p=43565 This month our Content Team did an amazing job at publishing and updating a ton of new content. Not only did our experts release the brand new AZ-303 and AZ-304 Certification Learning Paths, but they also created 16 new hands-on labs — and so much more! New content on Cloud...

The post New Content: Alibaba, Azure AZ-303 and AZ-304, Site Reliability Engineering (SRE) Foundation, Python 3 Programming, 16 Hands-on Labs, and Much More appeared first on Cloud Academy.

]]>
This month our Content Team did an amazing job at publishing and updating a ton of new content. Not only did our experts release the brand new AZ-303 and AZ-304 Certification Learning Paths, but they also created 16 new hands-on labs — and so much more!

New content on Cloud Academy

At any time, you can find all of our new releases by going to our Training Library and finding the section titled “New this month in our library.” You can also keep track of what new training is coming for the next 4-6 weeks with our Content Roadmap.


Alibaba

Course: Alibaba Object Storage Service

This course is an introduction to the fundamental aspects of Alibaba’s Object Storage Service (OSS). It starts off by explaining the features and advantages of the service, before moving on to the concepts of OSS and security. You will then watch two demos that use real-life examples from the Alibaba Cloud platform to guide you through storage buckets and object operations.


AWS

Learning Path: Technical Essentials of AWS

This Learning Path is for beginners and is intended for those who are ready to begin their journey into the AWS cloud. It covers AWS and its foundational services of compute, storage, networking, and databases.

Course: Introduction to Amazon Elastic Block Store (EBS)

This is a short course that will introduce you to the Amazon Elastic Block Store (EBS) service, giving you an overview of what the service is and when you would use it as a storage option for your Amazon EC2 instances.

Course: Understanding Costs Associated with Amazon RDS

This course explores the cost metrics associated with the Amazon Relational Database Service, known as RDS. Minimizing cloud spend is always a priority when architecting and designing your cloud solutions, and care should be taken to understand where your costs come from and the steps you can take to reduce them.

Course: Amazon RDS: Introduction to Monitoring

This introductory course provides a solid foundation in monitoring Amazon RDS using AWS tools. It begins by getting you acquainted with monitoring databases hosted on the Amazon RDS service and then moves on to explore the available AWS tools that can be used for this purpose.

Course (UPDATE): Database Fundamentals for AWS (Part 1 of 2)

This course covers Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, and Amazon Neptune. As well as getting a theoretical understanding of these, you will also watch guided demonstrations from the AWS platform showing you how to use each database service.

Course (UPDATE): Database Fundamentals for AWS (Part 2 of 2)

This is the second course in a two-part series on database fundamentals for AWS. This course explores four different AWS database services — Amazon Redshift, Amazon QLDB, Amazon DocumentDB, and Amazon Keyspaces — and the differences between them. As well as getting a theoretical understanding of these, you will also watch guided demonstrations from the AWS platform showing you how to use each database service.

Hands-on Lab: Collecting Log Data with Kinesis Agent and Querying with Amazon Athena

Amazon Kinesis Agent is an application that continuously monitors files and sends data to an Amazon Kinesis Data Firehose Delivery Stream or a Kinesis Data Stream. The agent handles rotating files, checkpointing, and retrying upon a failure. In this lab, you will install and configure Kinesis Agent, use it to collect log entries, and query the log entries with Amazon Athena.

Hands-on Lab: Sessionizing Clickstream Data with Kinesis Data Analytics

Kinesis Data Analytics allows you to make use of existing and familiar SQL skills. It also integrates with other AWS services. You can deliver your results to any destination supported by Kinesis Data Streams or Kinesis Firehose, and use a Lambda function to deliver to external or un-managed destinations. In this lab, you will learn how to use Kinesis Data Analytics to sessionize sample clickstream data and output it to DynamoDB using a Lambda function.


Azure

The AZ-303 and AZ-304 exams replace the older AZ-300 and AZ-301 exams, which will retire on September 30, 2020.

Learning Path: AZ-303 Exam Preparation: Technologies for Microsoft Azure Architects

This learning path is designed to help you prepare for the AZ-303 Microsoft Azure Architect Technologies exam. Even if you don’t plan to take the exam, these courses and hands-on labs will help you gain a solid understanding of how to architect a variety of Azure services.

Learning Path: AZ-304 Exam Preparation: Designing a Microsoft Azure Architecture

The AZ-304 exam tests your knowledge of five subject areas. We’ll start with designing Azure monitoring. Next, we’ll show you how to design for identity and security using Azure Active Directory. After that, you’ll learn how to design a data storage solution. Then, you’ll find out how to design a business continuity strategy using services such as Azure Site Recovery. Finally, we’ll cover how to design infrastructure.

Learning Path (UPDATE): AZ-400 Exam Prep: Microsoft Azure DevOps Solutions

This learning path is designed to help you and your team prepare for the AZ-400 Microsoft Azure DevOps Solutions exam. Even if you don’t plan to take the exam, these courses and hands-on labs will help you get started on your way to becoming an Azure DevOps specialist.

Course: Introduction to Azure Resource Manager

In this course, we look at how Azure Resource Manager (ARM) templates can be built and deployed. We start with a simple template and move on to more complex examples to illustrate many of the useful features available when deploying resources with templates. This course contains plenty of demonstrations from the Azure platform so you can see exactly how to use Azure Resource Manager in practice.

Course: Managing Application Configuration and Secrets in Azure

This course shows you how Azure’s App Configuration Service allows you to manage access to settings data, and how to use it within a .NET application. We will look at using Azure Key Vault in conjunction with the App Configuration Service, and how to access Azure Key Vault directly from your application and from apps running in a container within a Kubernetes cluster. This course contains numerous demonstrations from the Azure platform so you can get a first-hand look at the topics we will be covering.

Course: Implementing Version Control on Azure Repos

This course explores how to implement version control on Azure repos. It begins with an overview of what source control is and the different types of source control available. It then looks at the key elements of branching, different branching strategies, and how they impact the development process. You’ll move on to learn about pull requests and merging as repository functions, and the different merging scenarios available to you.

Course: Adding Mobile Devices to Your Azure DevOps Strategy

This course dives into creating a DevOps strategy for mobile applications using the Visual Studio App Center. The App Center gives us a centralized location where we can implement build services, carry out mobile UI testing with multiple devices sets, create public and private distribution groups, and perform release management for our distribution groups.

Course: Designing an Infrastructure and Configuration Management Strategy

This course provides you with the foundational knowledge required to design an infrastructure and configuration management strategy. It starts by looking at hosting infrastructure — IaaS, PaaS, FaaS, and some modern native app options — before moving on to look at Infrastructure-as-Code. You’ll learn what Infrastructure-as-Code means and the tools and technologies that are used to deploy and manage it.

Hands-on Lab: Creating a Classification Model with Auto ML in Azure Machine Learning Studio

Automated ML is based on breakthrough research from the Microsoft Research division. Experiments can be processed on various types of computing resources, such as virtual machines and Kubernetes clusters, either locally or in the cloud. In this lab, you will create a predictive classification model with Azure’s Automated Machine Learning service to quickly discover the optimal model for a dataset.


Google Cloud Platform

Course: Introduction to Google Cloud Operations Suite

The Google Cloud Operations suite (formerly Stackdriver) includes a wide variety of tools to help you monitor and debug your GCP-hosted applications. This course will give you hands-on demonstrations of how to use the Monitoring, Logging, Error Reporting, Debugger, and Trace components of the Cloud Operations suite. You can follow along with your own GCP account to try these examples yourself.

Hands-on Lab: Inspecting and De-Identifying Data With Google Cloud Data Loss Prevention

In this lab, you will upload data to a Cloud Storage bucket, start an inspection of this data to understand whether it contains sensitive information, and de-identify sensitive information by using REST APIs.

Hands-on Lab: Starting a Linux Virtual Machine on Google Compute Engine

The big advantage of using Compute Engine VMs is that when you can’t find a VM type that suits your needs, you can create one with custom CPUs and memory size. Compute Engine also allows you to create a group of VMs to meet scaling requirements. In this lab, you will create a Linux Compute Engine VM, and you will SSH into the instance using the browser-based connection.

Hands-on Lab: Starting a Windows Virtual Machine on Google Compute Engine

In this lab, you will create a Windows Compute Engine VM, and you will connect to it through RDP using the Microsoft RDP client.

Lab Challenge: Google Cloud Basic Compute Engine Challenge

In this lab challenge, you will need to prove your knowledge of the Compute Engine service offered by Google Cloud. The objectives you will need to achieve represent essential skills that a Google Certified Associate Cloud Engineer needs to have. You’ll be given a desired end state and be required to reach it using your knowledge of the Google Compute Engine service.


Data Science/Artificial Intelligence

Hands-on Lab: Handling Variable Data in DynamoDB with Grace

Data comes in many varieties, which can be a major drawback when you are tied to a fixed schema. NoSQL allows any type of data to be stored, letting your schema evolve. In today’s world of IoT and increasing data complexity, NoSQL is becoming a standard part of many technology stacks. This lab is aimed at students with a basic understanding of Python who want to learn about Amazon DynamoDB. Students will insert and scan data both programmatically and through the DynamoDB Console.

Hands-on Lab: Reviewing Product Inventory in a Cloud SQL Database

This lab is aimed at beginners who want to move beyond spreadsheets and migrate their data into a database. After completing this lab, students will be able to use a MySQL Database in Google Cloud SQL and populate it from a CSV file. Additionally, students will learn how to use basic queries against their data.
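
The core workflow of this lab (creating a table, populating it from a CSV file, and running basic queries) can be sketched with nothing but the Python standard library, using SQLite as a local stand-in for a Cloud SQL MySQL instance; the table and data below are made up for illustration:

```python
import csv
import io
import sqlite3

# Hypothetical inventory CSV, standing in for the file used in the lab.
CSV_DATA = """sku,name,quantity,price
A100,Widget,25,9.99
A200,Gadget,8,24.50
A300,Gizmo,0,5.75
"""

# SQLite in memory here; against Cloud SQL this would be a MySQL connection.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE inventory (sku TEXT PRIMARY KEY, name TEXT, quantity INTEGER, price REAL)"
)

# csv.DictReader yields one dict per row, which maps directly onto
# named SQL parameters.
conn.executemany(
    "INSERT INTO inventory VALUES (:sku, :name, :quantity, :price)",
    csv.DictReader(io.StringIO(CSV_DATA)),
)

# A basic query against the imported data: which products are out of stock?
out_of_stock = [name for (name,) in conn.execute(
    "SELECT name FROM inventory WHERE quantity = 0"
)]
print(out_of_stock)  # → ['Gizmo']
```

The same `DictReader`-into-`executemany` pattern carries over to a MySQL driver; only the connection setup and placeholder syntax change.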

Hands-on Lab: Visualizing Data in Amazon QuickSight

Amazon QuickSight can help you visualize, embed, and share data quickly. These valuable insights allow users to view quick summaries of aggregated information and make better business decisions. This lab is aimed at beginners who want to learn how to import data and create their first insight.

Hands-on Lab: Working With Complex Rest APIs

Many applications are built on Representational State Transfer (REST) APIs, a way of structuring an API so that data can be retrieved and manipulated through HTTP verbs. This lab dives into the GET, POST, DELETE, and PATCH verbs. Students will learn how to use Python to interact with an API to view, create, update, and delete data.
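
All four verbs can be exercised end to end with only the standard library. The sketch below spins up a toy in-memory API with `http.server` and calls it with `urllib`; every endpoint name and payload here is illustrative, not part of the lab:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

ITEMS = {"1": {"name": "alpha"}}  # toy in-memory data store

class ItemHandler(BaseHTTPRequestHandler):
    def _send(self, status, body):
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def _read_body(self):
        return json.loads(self.rfile.read(int(self.headers.get("Content-Length", 0))))

    def do_GET(self):     # GET: read data
        self._send(200, ITEMS)

    def do_POST(self):    # POST: create a new resource
        new_id = str(len(ITEMS) + 1)
        ITEMS[new_id] = self._read_body()
        self._send(201, {"id": new_id})

    def do_PATCH(self):   # PATCH: partially update an existing resource
        item_id = self.path.rsplit("/", 1)[-1]
        ITEMS[item_id].update(self._read_body())
        self._send(200, ITEMS[item_id])

    def do_DELETE(self):  # DELETE: remove a resource
        ITEMS.pop(self.path.rsplit("/", 1)[-1], None)
        self._send(200, {})

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), ItemHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

def call(method, path, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(base + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

call("POST", "/items", {"name": "beta"})        # create item 2
call("PATCH", "/items/2", {"name": "beta-v2"})  # update part of item 2
call("DELETE", "/items/1")                      # remove item 1
print(call("GET", "/items"))                    # read what remains
```

In practice you would point the same client code at a real API with a library such as `requests`; the verb semantics are what carry over.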

Hands-on Lab: Dealing With Schema Changes in a Data Service

Databases are extraordinarily powerful for managing sets of data, but what do we do when management wants to add, remove, or modify something the code relies on? This lab is aimed at students with a moderate understanding of data engineering and Python who want to learn how to update schemas to handle new features. We will walk through the changing requirements of a bug tracking application and how to handle them.
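
A minimal sketch of this kind of schema change, using SQLite from the standard library (the bug-tracker table and columns here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bugs (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO bugs (title) VALUES ('Login page crashes')")

# New requirement: every bug needs a severity. Adding the column with a
# DEFAULT keeps existing rows valid while new code populates the field.
conn.execute("ALTER TABLE bugs ADD COLUMN severity TEXT DEFAULT 'medium'")
conn.execute("INSERT INTO bugs (title, severity) VALUES ('Typo in footer', 'low')")

rows = conn.execute("SELECT title, severity FROM bugs ORDER BY id").fetchall()
print(rows)  # the pre-existing row reports the default severity
```

Additive changes with defaults like this are the safest kind of migration, since old rows and old queries keep working unchanged.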

Hands-on Lab: Capturing New Knowledge from Your Data

This lab is aimed at students with a moderate understanding of data engineering and Python who want to understand how to perform aggregates on data such as COUNT(), SUM(), and AVG(). We will also look into advanced statements like CASE, and functions like DATEDIFF() and CONCAT(). We will walk through the changing requirements of a bug tracking application and how to handle them.
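
A rough sketch of these aggregate and expression queries, using SQLite from the standard library; `julianday()` and the `||` operator stand in for MySQL's `DATEDIFF()` and `CONCAT()`, and the bug-tracker data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bugs (id INTEGER PRIMARY KEY, status TEXT, opened TEXT, closed TEXT)"
)
conn.executemany(
    "INSERT INTO bugs (status, opened, closed) VALUES (?, ?, ?)",
    [
        ("closed", "2020-08-01", "2020-08-04"),
        ("closed", "2020-08-02", "2020-08-03"),
        ("open", "2020-08-05", None),
    ],
)

# COUNT(*) totals rows, a CASE inside SUM() buckets them, and AVG() over a
# date difference gives the mean days to resolution (NULLs are skipped).
total, open_count, avg_days = conn.execute("""
    SELECT COUNT(*),
           SUM(CASE WHEN status = 'open' THEN 1 ELSE 0 END),
           AVG(julianday(closed) - julianday(opened))
    FROM bugs
""").fetchone()
print(total, open_count, avg_days)  # → 3 1 2.0

# SQLite's || concatenation operator plays the role of CONCAT().
label = conn.execute("SELECT 'BUG-' || id FROM bugs WHERE id = 1").fetchone()[0]
print(label)  # → BUG-1
```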

Hands-on Lab: Using Python to Drive Insights in BigQuery

In today’s world, we have to deal with ever-growing datasets. These often require specialized computer solutions to handle the extremely large volume. Google BigQuery makes it easy to query, process, and visualize large datasets. In this lab, we will get started with BigQuery to analyze a large public dataset.

Hands-on Lab: Drawing Insights with BigQuery ML

Machine learning is proving to be very powerful in gathering insights with your data. BigQuery ML allows the user to perform Machine Learning training, evaluation, and prediction on large sets of data. This lab will explain some of the basic concepts along with an example of training a linear regression and binary logistic regression model.

Hands-on Lab: Performing K-Means Clustering With Python

K-Means clustering is a machine learning technique used to divide a dataset into clusters for analysis. The algorithm divides a large group of data points into smaller groups, maximizing the similarity between the data points within each group. We will walk through applying and analyzing the K-Means clustering algorithm on a set of data using the Python libraries pandas, scikit-learn, and matplotlib.
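
The algorithm itself fits in a few lines. Below is a dependency-free Python sketch with naive deterministic initialization; the lab uses scikit-learn, whose `KMeans` adds refinements such as k-means++ initialization:

```python
import math

def kmeans(points, k, iterations=20):
    """Minimal K-Means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points, and repeat."""
    centroids = list(points[:k])  # naive init; scikit-learn uses k-means++
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(coords) / len(cluster) for coords in zip(*cluster))
            if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated 2-D groups; K-Means should recover their centers.
data = [(1, 1), (1.5, 2), (2, 1.5), (8, 8), (8.5, 9), (9, 8.5)]
centroids, clusters = kmeans(data, k=2)
print(sorted(centroids))  # → [(1.5, 1.5), (8.5, 8.5)]
```

The scikit-learn equivalent is `KMeans(n_clusters=2).fit(data)`, which also handles multiple random restarts and convergence checks for you.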

DevOps

Learning Path: Site Reliability Engineering (SRE) Foundation

This learning path will provide you with an introduction to the principles and practices that enable enterprises to reliably and economically scale critical services. Adopting SRE into your own organization requires re-alignment, with a new focus on engineering and automation, and the adoption of a range of new working paradigms. The SRE learning path consists of eight courses, each focusing on a particular aspect of SRE.

Course: Introduction to Helm

Helm is a package manager for Kubernetes, used to simplify and enhance the deployment of resources into a Kubernetes cluster. This training course explores Helm 3, the latest version of Helm, which builds on the successes of Helm 2. During this course, you’ll learn the fundamentals of working with Helm 3, its features, and the key differences between Helm 3 and Helm 2.


Programming

Hands-on Lab: Python 3 Programming

This is a technical lab that introduces the Python 3 programming language. Python has enjoyed popularity in a variety of fields for its wide scope of application and its ease of use. With Python 2 having reached end-of-life status on January 1, 2020, there is also a large amount of legacy code that needs porting to Python 3.


Webinars

Crush Mediocrity: 3 Steps to Unleash Your Team’s Cloud Potential

Learn about the main challenges that companies face and how to overcome them with an efficient and business-oriented tech training program.

Increase Big Data Application Throughput and Save up to 90% on Compute Costs

Are you running big data workloads on frameworks such as Spark or Hadoop? Join Chad Schmutzer, AWS Principal Developer Advocate, and Stuart Scott, Cloud Academy AWS Content & Security Lead, in this on-demand webinar as they explain how you can accelerate your big data application performance and lower costs by using Amazon EC2 Spot Instances with Amazon EMR.

Office Hours: Unraveling the AWS Technical Essentials

Are you getting ready to take your first steps into the AWS cloud? Are you considering the AWS Solutions Architect Associate exam? Cloud Academy can help you. If you want your first steps to have impact and focus, watch this on-demand webinar with our experts, Stephen Cole — AWS Certification Specialist and Will Meadows — Senior AWS Trainer, to learn about AWS’s foundational services: Compute, Databases, and Storage.


Stay updated

As always, we use Cloud Academy Blog to keep you up-to-date on the latest technology and best practices. All of our new blogs, reports, and updates go directly to our Cloud Academy social media accounts. For the latest updates, follow and like us on the following social media platforms:

The post New Content: Alibaba, Azure AZ-303 and AZ-304, Site Reliability Engineering (SRE) Foundation, Python 3 Programming, 16 Hands-on Labs, and Much More appeared first on Cloud Academy.
