Embracing the AI Revolution: A Survival Guide for Tech Pros

Embrace the AI revolution with these five survival tips! Navigate the new world of generative AI and leverage it for growth and success. Turn the tide of change to your advantage by learning, adapting, and governing AI, and retain your relevance in an AI-powered world.
In the current technological landscape, generative artificial intelligence (AI) tools are becoming increasingly common, leaving many professionals with a barrage of questions: How will these tools impact our roles? Could they replace us? Do we need to re-skill? This post aims to quell those concerns and guide you through this digital revolution, ensuring you not only survive but also prosper.

As disconcerting as it may be, we should remember that throughout history, humans have thrived through countless disruptive changes; the current AI revolution is simply the latest. It’s crucial to realize that no matter your role, AI presents an opportunity rather than a threat. Here are five tips to navigate and leverage this technological shift.

1. Don’t panic (it’s not the apocalypse)

Generative AI can seem threatening, but it’s important to remind ourselves that this is not an apocalypse. We often view artificial intelligence through the lens of pop culture, where machines take over and humanity ends up in a dystopian world (think I, Robot). But this outlook isn’t grounded in reality.

Our fear stems from a psychological process known as “catastrophizing,” where we predict the worst possible outcome and let that prediction spiral out of control. This tendency is hardwired into our survival instincts, which are usually useful, but it isn’t beneficial when evaluating technological advances like AI. Instead, it can stifle innovation and growth, and spark fear.

The arrival of AI isn’t a catastrophe; it’s an opportunity for growth and evolution. While it’s essential to prepare and adapt, the transition to a world incorporating more AI doesn’t mean human obsolescence. We’ve seen tech revolutions before — the advent of personal computers, the internet, and mobile phones — and we’ve adapted and grown each time. AI is just the next step.


2. Learn fast and lead from the front

Just as AI learns and evolves, we do, too. AI is only as smart as the information it’s been fed, which means your human experience and knowledge still hold immense value. Remember, everyone starts at the same place with AI: square one. It’s a level playing field, and you have the same opportunity to learn and adapt as everyone else. (We’ll even make it easy for you to start: learn about ChatGPT prompts, completion, and tokens or ChatGPT prompt engineering, role prompts, and chain prompting right in the Cloud Academy platform.) 

One aspect of this learning process is understanding generative AI and its relation to your field of expertise. With companies like Google, Microsoft, and AWS at the forefront of AI development, we’re witnessing advancements beyond imagination.

Consider this: AI today is based on language models that use vast amounts of data to generate content. Understanding the intricacies of these models can allow you to leverage them more effectively, delivering better outcomes. This journey of learning is more of a marathon than a sprint, but the rewards are more than worth the investment.

3. Your knowledge is the ultimate differentiator

In the evolving landscape of AI, your work knowledge is your best asset. As more tasks get automated, the value isn’t in performing these tasks, but in overseeing them — defining the tasks, setting the parameters, and ensuring the outcomes align with your goals.

Rather than viewing AI as a threat to your job, consider it a powerful tool that you can direct. By understanding the basics of AI and how it relates to your role, you can transform from being the one executing tasks to the one overseeing and directing these tasks, a shift that significantly elevates your role.

For example, consider a production assistant. While certain functional tasks might be automated, the human touch in defining, overseeing, and assuring the quality of these tasks is indispensable. The focus isn’t on doing the task but on managing the process, providing more significant value to your organization.

4. Direct and guide AI

As we transition into an era where AI plays a more substantial role in the workforce, we also need to navigate the challenges this presents. AI is a powerful tool, but it’s still a tool; it needs guidance and governance. You, with your understanding of your field, are best suited to provide this guidance.

You’re not just learning about AI for your benefit. You’re learning so you can guide AI to better perform tasks in line with your organization’s needs. AI can perform tasks, but it can’t understand the nuance or context — again, it needs you for that.

Governance is essential for any AI implementation. Ensuring the AI follows brand and business guidelines, creating a template for consistent use, deciding how much of a process should be automated — all these considerations need human oversight. Equally important are mitigation strategies for when things go awry.

5. Master the art of communication

In the realm of AI, clear, concise, and direct communication is key. We must learn to communicate with AI quite differently from the way we communicate with our colleagues and team members. Instructions for AI should be outcome-oriented and procedural. At the same time, it’s essential to communicate this new approach to your human colleagues and foster a common understanding of how AI is to be used within your organization.
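
For instance, instructions like the following tend to work better than open-ended requests. Here’s a minimal, hypothetical sketch — the product and audience details are invented purely for illustration:

```python
# A vague request usually yields generic output:
vague_prompt = "Write something about our new product."

# An outcome-oriented, procedural request states the role, the task,
# the audience, the constraints, and the desired ending:
procedural_prompt = (
    "You are a copywriter for a B2B software company. "
    "Write a 100-word announcement for our new reporting dashboard. "
    "Audience: existing customers. Tone: friendly, no jargon. "
    "End with a call to action pointing to the release notes."
)
```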

The integration of generative AI into the workforce will be a transformative journey, but it’s one that we’re embarking on together. By understanding and leveraging AI, we can navigate the changes it brings, ensuring we remain not just relevant, but indispensable in our respective roles.

As we prepare for the AI revolution, remember, we’re all in this together. So, let’s not just survive, but thrive in the age of AI.

A final word

While the AI revolution might seem intimidating, viewing it through the lens of opportunity rather than a threat can open doors for advancement. By following these survival tips, you can leverage AI to augment your capabilities, streamline processes, and lead in your field.

The key is to adapt and evolve, understanding that AI is a tool to be directed rather than a replacement for human intelligence. The future of AI is promising, and by embracing it, we’re making sure that we remain not just relevant but essential in the workforce of tomorrow.

Translating Machine Learning Outputs Using Shapley Values
One thing is guaranteed for every data scientist: when you face the problem of classification or regression, you always have to complete a series of actions crucial to the quality of the final output. These actions can be broken down into three steps.

  1. Dataset creation, where data is analyzed, cleaned, enriched, and used as the model’s inputs;
  2. Training phase, where models are implemented and each instance is fine-tuned based on the info extrapolated during the dataset creation phase and with the ultimate goal of selecting the model that guarantees the best performance; and finally
  3. Inference phase, where the trained model is made available to a service that provides ultimate inference on new data.

And while a data scientist may have control over training, and generally over data selection, they’re still often challenged with empowering people who aren’t tech-savvy to benefit from artificial intelligence-based results.

For example, a data scientist may need to explain to a physician — who typically has little experience with mathematics and statistics — the reasons an AI-based mechanism predicted whether a patient is at risk of suffering from a heart attack. This information is crucial for many reasons: from a health point of view, the usability of such information can allow the physician to provide the best treatment to each single patient, as well as to focus on more at-risk patients; from a technical point of view, the physician can better understand the combination of features that provided a particular output.

So a data scientist’s common challenge is explaining a model’s output (in this case, predictions regarding a patient) not just to people who aren’t familiar with mathematics and statistical modeling, but also to themselves, in order to better understand which features have the biggest impact on the model’s outcome.

Indeed, most of the time a model’s output is used differently by different stakeholders, who have varying needs as well as varying technical and analytical skills. What we, as machine learning specialists, think of as simple (such as querying a DWH to get information about a particular user, or interpreting a probability measure) is, for many, incredibly confusing.

That’s why a data scientist often finds themselves caught in the so-called accuracy-explainability dilemma. Nowadays, excellent production performance isn’t enough; one must also be able to democratize the result, allowing a potentially heterogeneous and fragmented audience of people to understand it.

Recently, the concept of eXplainable AI (XAI) has become popular, and a few solutions have been proposed to make black-box machine learning models not only highly accurate but also highly interpretable by humans. Among the emerging techniques, two frameworks have been widely recognized as state-of-the-art: the LIME framework and Shapley values.

The goal of this blog post is to shed light on the use of Shapley values to explain the result of any machine learning model, and to make the output accessible to a wider audience of stakeholders. In particular, we will use the Python SHAP (SHapley Additive exPlanations) library, which was born as a companion to this paper and has become a point of reference in the data science field. For more details, we invite you to look at the online documentation, which contains several examples you can draw inspiration from.

Here at Cloud Academy, we ran a webinar series entitled Mastering Machine Learning, where we discussed, in more general terms, the use of the SHAP library for a classification task. You can find a detailed description of the agenda here.

Real-life use case

Let’s talk about a famous dataset: the Wine Quality Dataset. It consists of a catalog of white and red wines of the Portuguese Vinho Verde variety, containing information useful for describing and evaluating each wine. For the sake of this example, we’ll only use the red wine dataset, with the goal of classifying the quality of each single example.
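
Here’s a minimal sketch of that setup. The dataset URL, the quality threshold used to define a “good” wine, and the model settings are illustrative assumptions, not necessarily what the companion notebook uses:

```python
import pandas as pd
from xgboost import XGBClassifier

# Red wine subset of the UCI Wine Quality dataset (semicolon-separated).
URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-red.csv")
df = pd.read_csv(URL, sep=";")

# Turn the 0-10 quality score into a binary target: good vs. bad wine.
X = df.drop(columns="quality")
y = (df["quality"] >= 7).astype(int)

# Fit a gradient-boosted tree classifier on the full dataset for brevity;
# a real pipeline would hold out a test set and tune these hyperparameters.
model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)
```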

To support this analysis, you can find a notebook here, where the entire data processing and fitting pipeline is illustrated. Assuming we’ve selected the best classifier, we may still face a dilemma, which the two examples below illustrate:

Feature                 Wine 1       Wine 2
fixed acidity           7.0000       8.0000
volatile acidity        0.60000      0.31000
citric acid             0.30000      0.45000
residual sugar          4.50000      2.10000
chlorides               0.06800      0.21600
free sulfur dioxide     20.00000     5.00000
total sulfur dioxide    110.00000    16.00000
density                 0.99914      0.99358
pH                      3.30000      3.15000
sulphates               1.17000      0.81000
alcohol                 10.20000     12.50000

Figure 2: Two examples from the Red Wine Quality Dataset

These two wines look very similar, but they’re technically different. In practice, our model (in this case an XGBoost model, i.e., Extreme Gradient Boosting) can learn very detailed patterns in the data that may not be obvious to the data scientist working on the dataset.

In the example in Figure 2, the model correctly predicts the bad wine (left column) and the high-quality wine (right column). The performance of the machine learning model is, therefore, good, but we’re left with the task of explaining the results in a convincing way. Which features positively affect the output score? Is there any way to see the contribution, in terms of probability score, of each single feature in the final output?

Shapley Values to the rescue!

In their seminal paper, Lundberg and Lee (2017) drew on a famous result from the American Nobel Prize-winning economist and mathematician Lloyd Stowell Shapley to explain the output score of a machine learning model. The approach uses Shapley’s prize-winning contribution, known as the Shapley value in cooperative game theory, to describe the impact of each single feature on the final output score.

More specifically, the Shapley value is based on the idea that the payoff obtained from a particular cooperative game should be divided among all participants in such a way that each participant’s gain is proportional to their contribution to the final output. This makes sense; everyone contributes differently to the output. This characteristic is particularly useful in machine learning, since we want to divide the overall probability score based on how much each single feature has contributed to it.
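
For reference, the classic game-theoretic definition assigns each player (feature) \(i\) the value

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr),
\]

where \(N\) is the set of all players, \(S\) ranges over the coalitions that exclude \(i\), and \(v(S)\) is the payoff achieved by coalition \(S\) alone. In the machine learning setting, \(v(S)\) is the model output when only the features in \(S\) are known, so \(\phi_i\) is feature \(i\)’s average marginal contribution over all possible orders in which features can be added. This formula comes from the game-theory literature rather than from the blog’s companion notebook.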

The SHAP (Python) library

The SHAP (Python) library is a popular reference in the data science field for many reasons, especially because it provides several algorithms for estimating Shapley values. It offers efficient algorithms for tree ensemble models and deep learning models. But, more importantly, SHAP provides a model-agnostic algorithm that can estimate Shapley values for any kind of model. An exciting recent development: an explainer compatible with Transformers for NLP tasks is now available in the library.

So why is this library useful? Well, there are at least two major reasons, which can be summarized with two simple concepts: global and local interpretability.

We’ll begin by framing the problem with a more practical argument. Consider the simple arithmetic mean, e.g., your average grade at university. The average gives you a general idea of the overall pattern: a general idea of how you performed at university.

However, the mean has a huge problem: it is sensitive to outliers, and it completely loses the interpretability of local behaviors. What does that mean? Well, an average grade doesn’t tell you how you performed on a particular topic in a particular exam.

The idea behind Shapley values is to go beyond the simple mean: they can explain a model’s output with respect to all observations (global interpretability) and to each single observation (local interpretability), because they build a quantitative indicator from the feature values.

Let’s dive into the concept of global interpretability. Consider a Random Forest classifier, or any gradient boosting-based classifier like XGBoost. Training such models means making decisions based on feature values. Technically speaking, the impact each single feature has on the trees’ splits can be tracked with a metric, typically the Mean Decrease in Impurity (MDI) criterion.

From a historical point of view, the MDI is also called Gini Importance, named in honor of the Italian statistician and economist Corrado Gini, who developed the Gini index, a measure of income inequality in a society; the related Gini impurity is used as the splitting criterion in classification trees.

The idea behind this criterion is pretty straightforward: every time a node is split on a specific variable, the Gini impurity of the two descendant nodes is lower than that of the parent node. Gini Importance then computes each feature’s importance as the sum, across all trees, of the impurity decreases from the splits that use that feature, weighted by the number of samples each split affects.

This is, more generally, called Global Feature Importance (or global interpretability) because it takes into account the many interactions of the same feature across different trees. Put in simpler terms, the global importance is nothing more than an average, over all samples, of that feature’s contribution during the splitting phase across all trees. This has a major limitation: it completely loses the explanation for individual observations.

We can, instead, build a global feature importance based on the Shapley values observed for each single feature. The SHAP global feature importance is computed from the magnitude of the Shapley feature attributions: for each feature, we compute the mean absolute value of its Shapley value across all observed statistical units. In this way, local contributions generalize to the overall population. In Figure 3, you’ll find an example of global importance based on the mean SHAP value.

Figure 3: The alcohol feature is the most important, changing the predicted absolute quality probability (on average) by approximately 6 percentage points
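
A minimal sketch of how a plot like Figure 3 can be produced, assuming the model and X from the training sketch earlier (the companion notebook’s exact calls may differ):

```python
import numpy as np
import shap

# TreeExplainer is the fast, exact algorithm for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance: mean absolute Shapley value per feature.
importance = np.abs(shap_values).mean(axis=0)
print(sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5])

# The same ranking rendered as a bar chart, as in Figure 3.
shap.summary_plot(shap_values, X, plot_type="bar")
```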

Shapley values can go beyond classical global feature importance. Thanks to the estimated Shapley values, we can compute the impact of each single feature on the prediction of any machine learning model. This is the core of local interpretability, since we can explain each single predicted outcome based on the feature values. Indeed, the Shapley values computed for a specific wine allow us to describe the impact of each single feature on the model outcome.

But how does it work?

From a technical point of view, we can identify two important aspects:

  • Base value: as the documentation states, “it is the reference value that the feature contributions start from.” Practically speaking, this is the average model output observed over the training data, namely the value we would predict if we didn’t know any features of the current observation;
  • g(z′): describes the model output as the sum of the expected model output (the base value) and the feature contributions. More formally,

\[
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i \, z'_i
\]

where \(\phi_i\) is the Shapley value of feature \(i\), \(z'_i \in \{0, 1\}\) indicates whether that feature is present, \(M\) is the number of features, and \(\phi_0\) is the base value. Each \(\phi_i\) is calculated by taking into account the feature’s contribution across all possible combinations of the other features. Check out section 2.4 of the original paper for more details.

Also, note that the Shapley values must be estimated from a model. In this notebook, we’ve fitted an XGBoost classifier, which is state-of-the-art for these types of prediction problems with tabular-style input data of many modalities (typically, numerical and categorical features). We used a TreeExplainer as the underlying SHAP model to explain the output of our ensemble tree model. If you’re interested in learning more, do read the original paper. There are many other SHAP explainers:

  • GradientExplainer, which approximates the Shapley values with expected gradients.
  • LinearExplainer, which computes the Shapley values for a linear model.
  • KernelExplainer, which uses a specifically-weighted local linear regression model to estimate the Shapley values and makes no assumptions about the model type.

From a performance point of view, it’s recommended that you use the TreeExplainer with any ensemble tree model (e.g., scikit-learn, XGBoost, CatBoost). It is, however, possible to use a KernelExplainer, though its performance may degrade with large batches of data.
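
As an illustration of that trade-off, here is a sketch of both explainers; the background-sample size and batch size are arbitrary choices for illustration, not recommendations:

```python
import shap

# Fast, exact path for tree ensembles (scikit-learn, XGBoost, CatBoost, ...).
tree_explainer = shap.TreeExplainer(model)
tree_values = tree_explainer.shap_values(X)

# Model-agnostic fallback: works with any prediction function, but every
# explanation needs many model evaluations, so summarize the background
# data and explain small batches to keep the runtime manageable.
background = shap.sample(X, 100)
kernel_explainer = shap.KernelExplainer(model.predict_proba, background)
kernel_values = kernel_explainer.shap_values(X.iloc[:10])
```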

For the sake of illustration, let’s consider the following figure that shows the impact of each single feature on the model prediction.


Figure 4: Local Interpretability of a wine’s quality prediction score

In particular, the blue segments describe the features that contribute the most to increasing the prediction score, whereas the red ones are the factors that decrease it. This kind of visualization of the model’s outputs is a great tool that allows everybody to understand which features have had the greatest impact on the final model score.
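
A sketch of how such a plot can be generated for a single wine, reusing the explainer and shap_values from the global-importance snippet above:

```python
# Local explanation for one wine (index 0 chosen arbitrarily).
i = 0
shap.force_plot(
    explainer.expected_value,  # the base value the contributions start from
    shap_values[i, :],         # this wine's per-feature contributions
    X.iloc[i, :],              # the feature values themselves
    matplotlib=True,           # render as a static matplotlib figure
)
```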

Conclusions

In summary, we learned how to employ Shapley values to explain the output of any machine learning model. We used the SHAP Python library, which comes with many models and features that help a data scientist convert a technical result into a friendly, human-interpretable explanation.

New AWS re:Invent Announcements: Swami Sivasubramanian Keynote
A look at what was new and interesting from Swami Sivasubramanian’s keynote.

Today we all got an hour or two of Swami’s time as he went over the many machine learning-focused releases from AWS (a total of 13 in all). Dr. Sivasubramanian is the VP of Amazon Machine Learning, and it’s always cool to hear about anything coming from his department.

Machine learning in AWS has been a long time in the works, and I have watched with piqued interest as it has evolved over time. When I think back to re:Invent 2017 and the release of Amazon SageMaker, it’s amazing to see just how far AWS has pushed the democratization of machine learning technology in just a handful of years.

Before SageMaker, it was quite a production to get any kind of machine learning workload running in the cloud. It was technically doable, of course, but you needed a large amount of AWS experience as well as a wealth of machine learning expertise. However, with each passing year since the release of AWS’s most important machine learning service, the technology has become easier and easier to get into for people of almost any background.

It was with this excitement for the future of machine learning that I turned on the stream for today’s keynote. I grabbed a cup of coffee and sat back with great hopes that AWS would once again push the bar a little higher and make machine learning a little more friendly for the rest of us.

Amazon DevOps Guru for RDS


For today’s first announcement, Swami opens with machine learning-powered RDS performance and availability insights via Amazon DevOps Guru for RDS. This new feature is an expansion of the already released Amazon DevOps Guru, which came out in early May of this year. Not quite what I was pumped up for, but hey, go ML!

DevOps Guru is focused on using machine learning to help developers improve their applications’ availability by detecting operational issues. It does so with metrics collected via events and log data pulled from other AWS services. This new branch extends the service into RDS and gives developers deeper insight into performance issues that might be extremely difficult to diagnose in a more standard way.

I don’t think this one pushes the bar much for helping someone get into machine learning, but if you are a frustrated database admin looking to pull just a shred more capacity through your overburdened systems – this one might just make your year. 

Amazon RDS Custom (Now with support for SQL Server applications) 


Striking again at the heart of the audience, Swami throws another database-related release like a quick jab to keep you off balance. I’ll be frank: I was not prepared for a first database-related release from the machine learning VP, much less a second.

Amazon RDS Custom is a managed database service that helps your applications run on customized operating systems and database environments. Just at the end of October, the service expanded to include Oracle within its purview and today it can now support SQL Server applications.

This service is generally used for legacy, custom, and packaged applications so it’s probably not something the vast majority of people will be super excited about. If you were however hanging on the edge of your seat for SQL Server support on Amazon RDS Custom, holiday presents came early.

Amazon DynamoDB Standard-Infrequent Access Table Class

By now I’m starting to think that Swami is gunning for Raju Gulabani’s job (the VP of AWS Databases and Analytics), because we get yet another database release. Can you be VP of two organizations? Should you be VP of two organizations? When do we get to learn about robots and machine learning stuff? These questions rattle through my brain as I finish my first cup of coffee.

Anywhoo, we are now introduced to the Amazon DynamoDB Standard-Infrequent Access table class. This new way to store your DynamoDB tables promises to reduce costs by up to 60%. The class is ideal for long-term storage of infrequently accessed data.

It’s very similar to how S3 Infrequent Access works: if you don’t plan to access your data all that often, but it still needs to be ready at a moment’s notice, this will be a huge cost saving. Using this table class does mean that your writes will be more expensive.

On the AWS pricing page for the new feature, they post an example that shows:

Table class    42.5 million writes    42.5 million reads
Standard       $1.25 per million      $0.25 per million
Standard-IA    $1.56 per million      $0.31 per million

Combined with storage that’s up to 60% cheaper, this can add up to quite a nice savings overall, but be careful of those higher request costs!
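
To see where the savings actually come from, here’s the arithmetic from that example plus an illustrative storage comparison. The $0.25 and $0.10 per GB-month storage prices are assumptions based on published us-east-1 pricing at the time; check the current pricing page before deciding:

```python
millions = 42.5

# Throughput cost, using the per-million request prices above.
std_requests = millions * 1.25 + millions * 0.25   # writes + reads
ia_requests  = millions * 1.56 + millions * 0.31

# Illustrative storage cost for 1 TB held for one month (assumed prices).
std_storage = 1000 * 0.25   # Standard: $0.25 per GB-month (assumed)
ia_storage  = 1000 * 0.10   # Standard-IA: $0.10 per GB-month (assumed)

print(f"Standard:    ${std_requests + std_storage:,.2f}")   # $313.75
print(f"Standard-IA: ${ia_requests + ia_storage:,.2f}")     # ~$179.47
# Standard-IA costs more per request but far less per GB stored, so the
# savings appear when storage, not throughput, dominates the bill.
```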

AWS Database Migration Service Fleet Advisor

He did it again… relentlessly releasing database services and improvements like they just fall from the sky. Someday we will return to the path of machine learning and see its great gifts once more; however, we might have to wait up to five minutes for the next announcement.

This new service addition, AWS Database Migration Service Fleet Advisor, promises to help accelerate your database migrations. It’s another add-on to the standard Database Migration Service, this one focused on automating the discovery and analysis of your database fleet. It collects and analyzes your database schemas and objects to help you build a customized migration plan. In theory, this plan will make it easier to move into AWS without using any third-party tools or outside migration experts.

Time will tell with this one I think… Moving onwards!

Amazon SageMaker Ground Truth Plus


It’s happening! By god, it’s finally happening! They said it couldn’t be done, but here we are with some machine learning content! Besides databases, another theme of today seems to be add-on services; this add-on, however, is impressive in many ways.

This new version of Ground Truth is a hybrid data labeling service that uses both machine learning and an ‘expert workforce’ to help label your data. The original version of Ground Truth simply used Amazon Mechanical Turk to farm out data labeling to third-party vendors or your own private teams.

This new and improved Ground Truth Plus adds a few more in-depth steps that should, in theory, improve the quality of your data labels. When setting up a new data labeling job, you fill out a form that explains the requirements of your labeling project. This is followed by a call from AWS experts, who will discuss your project and presumably begin to assemble the experts your situation requires. You then upload your data to a predetermined S3 bucket for labeling by the service.

Using the base-level Ground Truth tools already in the service, the appointed experts can begin the labeling job as normal. From there, an ML system hops into the fray and begins to pre-label the images in your dataset based on what the experts have already done. These ML-labeled images are also sent to the experts, who double-check the model’s work. This teamwork approach should greatly increase the speed of labeling.

Overall, I rate this new feature pretty cool out of ten!

Amazon SageMaker Studio Notebook


Two in a row for our machine learning update! I feel like this is a good trendline and we are back to heading in the right direction. Swami returns to center stage and unveils another add-on service, but a good one!

Today we are introduced to Amazon SageMaker Studio Notebooks. This update adds collaborative notebooks that can be launched quickly because they don’t require compute resources and file storage to be set up ahead of time. They use a set of instance types referred to as ‘Fast Launch’ types, which are designed to spin up within two minutes (WAY faster than normal for a notebook). Amazon SageMaker Studio Notebooks can also easily connect with Amazon EMR and Amazon S3 to import your datasets so you can transform and analyze them as you see fit.

These notebooks provide persistent storage that even lets you view and share a notebook while its instance is offline. When sharing a notebook, you can create a read-only URL that won’t allow any changes to your underlying architecture. When your recipient opens the URL, they can choose to create a copy of the notebook, which duplicates the underlying instance type and SageMaker image that your notebook was running on. Overall, this is a very cool addition to the SageMaker portfolio.

Three New Amazon SageMaker Infrastructure Updates:


Next up we got three infrastructure upgrades for Amazon SageMaker – we are now officially on a roll with the ML content.

The first update is Amazon SageMaker Training Compiler. This new feature allows you to accelerate your deep learning model training by up to 50%, accomplished by using the underlying instance’s GPU ‘more efficiently.’ Digging deeper into this update, I found that the Training Compiler accelerates training by converting your deep learning model (which is written in a high-level language) into a hardware-optimized representation that makes better use of the GPU. Neat!

The second update is Amazon SageMaker Inference Recommender. This service is designed to help you choose the best compute option and configuration for deploying your machine learning models, balancing optimal inference performance against cost.

Since there are over 70 ML instance options to choose from, each with differing resource availability, finding the correct one for your model can be very time-consuming. This new update promises to automatically select the right compute instance and type for you, along with the number of instances you should run, container parameters, model optimizations, and what have you, to give you the best performance-per-cost ratio. If all of this works as described, it will be an amazing quality-of-life upgrade for pretty much everyone working on machine learning within AWS.

And the final infrastructure update is Amazon SageMaker Serverless Inference. This new inference option allows you to deploy machine learning models for inference without needing to create or manage the underlying infrastructure. Everything is automatically provisioned and scaled for you, and compute is turned off when no longer needed. As with many services of this type, you pay only for the duration of the running inference code and the amount of data processed.

Amazon SageMaker Canvas


Now we are really getting into the good stuff, the machine learning for machine learners, the crème de la crème. These kinds of updates are what really bring people to re:Invent, in my opinion: the things that make technology easier for the masses. Paint-by-numbers machine learning, brought to you by AWS.

Well, it’s not quite that simple, but Amazon SageMaker Canvas allows you to create and run an entire ML workflow through a drag-and-drop user interface. This new feature of SageMaker lets laypeople start creating ML systems that can help with business analysis and predictions, without writing any code or requiring any ML experience.

All you need is some preexisting data in CSV format, like a product catalog and some historical shipping data. This can be imported into SageMaker Canvas manually or even fetched from Amazon S3, Amazon Redshift, or Snowflake.

With this information, you could create a predictive model within SageMaker Canvas based on the data. Using the model it creates, you can then forecast whether your next shipments will be late or arrive on time based on any of the factors within your dataset. Based on what I’ve seen so far in the keynote and in the press release online, this is an incredible addition to SageMaker and will gain a lot of traction in the future.

Amazon Kendra Experience Builder


Well, we had a good run on machine learning updates related to building your own models; time to get back to brass tacks. By this point in the stream, I’ve made it onto coffee number two, and Swami swings in with glee to introduce Amazon Kendra Experience Builder.

In case you are unfamiliar with the base service (like I was), Amazon Kendra is an ML-powered search service designed for the enterprise environment, helping your employees find content scattered throughout the organization. This content might live in documents, repositories, reports, guides, S3, Salesforce, and many other locations.

Amazon Kendra Experience Builder works on top of all of that to help you deploy a fully customizable search application within a few clicks. It doesn’t require any programming or machine learning experience. Everything is built through an intuitive visual workflow that allows you to create a powerful search engine for your disparate files and figures. It’s very reminiscent of how Amazon Cognito allows you to create your own login page for SSO actions and what have you.

Amazon Lex Automated Chatbot Designer


Nearing the end of Swami’s talk, Amazon Lex is invited up for a service update with the Amazon Lex Automated Chatbot Designer. This new feature of the older chatbot service promises to reduce the time it takes to create and design an advanced natural-language chatbot. It expands the design phase within Amazon Lex by using machine learning to provide an initial bot design, which you then get to refine and update, hopefully gaining a head start over creating one from scratch.

It does this by using conversation transcripts between your callers and your agents to derive common intents and related information. Given enough transcripts, this will greatly reduce the amount of grunt work required to produce a moderately tolerable chatbot. You will, of course, need to tune and prune the results to fit your use cases, but hey, I like doing less work!

Amazon SageMaker Studio Lab


Our second-to-last machine learning release from AWS comes as another addition to SageMaker: Amazon SageMaker Studio Lab. This is a free service offered to help people begin their journey into machine learning.

AWS has created a way for people to quickly hop in (without an AWS account, without a credit card, heck, with zero cloud knowledge whatsoever) and begin building an ML model. This is the type of update I would be looking for if I were just starting my ML journey or even remotely curious about the space.

After signing up for your free Studio Lab account, you’ll get access to a machine learning environment that requires no setup or configuration. You can use any framework you want, such as PyTorch or TensorFlow, with up to 12 hours of CPU or 4 hours of GPU time per session to enjoy as you please. Studio Lab is integrated with GitHub, so you can download, edit, and run any notebook you want. This is perfect for those who just want to get their feet wet without any repercussions to their AWS bill.

AWS AI & ML Scholarship Program

Now at the end of the session, Swami lets us know that AWS is offering a scholarship for those interested in learning about AI and machine learning. This funding is available to underrepresented and underserved high school and college students. The goal of the program is to help these students prepare for careers in the artificial intelligence and machine learning fields.

If you happen to fit into those categories, I would highly recommend you take a look at this program, or any program really that is related to ML. Machine learning is the future, and the future is coming a lot faster than many people may realize.

Cloud Migration Series (Step 4 of 5): Adopt a Cloud-First Mindset
This is part 4 of a 5-part series on best practices for enterprise cloud migration. Released weekly from the end of April to the end of May 2021, each article will cover a new phase of a business’s transition to the cloud, what to be on the lookout for, and how to ensure the journey is a success.

Be sure to subscribe to our blog to be notified when new content goes live!

Adopt a Cloud-First Mindset

Why should you adopt a cloud-first mindset? As you start on your cloud migration goals, it’s going to be key to repeatedly come back to the fundamentals as people lose focus or enthusiasm. We’re talking about the phenomenon known as the “trough of sorrow” where a dip in initiative and output occurs, usually when novelty wears off and learning becomes harder.

To be prepared, it’s important to refer back to your fundamental goals. The new mentality at your organization will consist of the following key understandings, which will eventually become part of the healthy baseline culture:

  • Being aware of cloud tools and services — this is the what in terms of the cloud: understanding the landscape, the providers, and the language so you can have a conversation with stakeholders, vendors, and consultants.
  • Being able to use cloud services effectively, economically, and safely — this is the how, part 1: Safety and economy first. You don’t want to cause any more problems or spend budget unwisely.
  • Understanding how to apply cloud tools to solve customer problems — this is the how, part 2: The other side of being able to use cloud tools safely is being able to apply them to real-world problem-solving.
  • Being able to use your cloud services together to create new products and solutions — this is the why: this piggybacks on the previous point. Whether it’s a service or a product, you’re going to want to take advantage of the opportunity to be first to market with something new.

Getting through these steps is a cycle, an ongoing process. As mentioned at the beginning, one part of the journey that always happens as a large group progresses on a big change is the “trough of sorrow.” 

There are always going to be peaks and valleys in your progress, and the faster you can get back to your goals, the better — so how do you do that? There are a few ways to build back momentum.

Certification campaigns

Our practitioners and instructors at Cloud Academy have had lots of opportunities to interact with enterprises at various parts of their cloud transition. What we’ve seen in other engagements is that internally commenced certification campaigns can provide personal motivation to individual team members. These cert campaigns help team members commit to gaining new domain knowledge.

Further, when leaders can incentivize people to get core certifications for a desired specialty — i.e., AI certifications on Azure in order to develop solutions — this helps the employees’ own professional development while at the same time putting the overall team in a strong position to tackle new product initiatives.

Product teams brought up to speed

Remember that it’s not just IT and engineering that need to be educated and fluent in the cloud. For your team to gain maximum benefit from the full offering of cloud technologies, you’ll need all product-oriented roles to be aware of how cloud-based services, such as artificial intelligence tools, can be applied to solve business issues. As a starting point, evaluate areas where you may be struggling with data. Can a turnkey AI solution help here? If not, what types of changes would need to take place in order to leverage some of the positives of a managed service (and, later down the road, a custom-built service)?

Unsure of some of your staff’s levels? Get back to basics

We’re going to continue beating the drum on this, but it’s worth it: you will need to assess your entire team’s skill levels and continually monitor them as time goes on. This sounds like a lot of work, which is why a programmatic approach that can scale with your organization is the ideal way to keep learning momentum moving across the board.

Just make sure that the learnings are outcome-oriented: whether it’s certifications, specific job roles, or specific technical tasks, the learning paths that your employees take should have a clear goal.

The last challenge: working together

Once you take all these steps you’ll get back on track, and all teams will become aware of how to solve business issues. But let’s be honest: there will always be the challenge of getting people to work together.

To put it bluntly, the big challenge is this: how do we get people in cross-functional teams to work together with all these new services and practices?

The answer is to run practical exercises. These need to be cross-functional projects that are engaging, quick to start, and quick to yield results.

Engagement + Collaboration = Progress

What would be a practical exercise, and why would it help? Find a partner with domain expertise both in cloud and in upskilling employees, and you’ll be able to get guidance to create blueprint-like exercises that can be applied to projects.

These blueprints can work across teams, with contributions from IT, Engineering, and Product, and collaboration between all managers. Further, the bar for learning experiences gets higher every year: learners want very little friction (think about coding labs starting in 30 seconds vs. 3 hours of installing and troubleshooting software). It also makes more and more sense to learn together with coworkers, as opposed to the current single-player experience. Collaboration is set to be a huge area, driving a better user experience and faster, more communal learning.

Conclusion

This type of engagement at the individual and communal level, with real-time tracking and modification of progress and goals, is going to be key to helping any team stay focused as they go through their cloud transformation. Understand that no matter the size of the organization, attention will need to be paid to learners and managers as their focus naturally wavers. Having a concrete plan in place makes it infinitely easier to address the natural speed bumps and challenges as they occur in the learning and cloud migration process.

Ready, Set…Cloud!

If you’d like a preview of what our blog series will cover in a more in-depth fashion, this guide is a great start. We share some best practices and insights gained from our experience helping many organizations on their journey to cloud success. Use it as a helpful reminder to stay on track.


Cloud Migration Series (Step 3 of 5): Assess Readiness
This is part 3 of a 5-part series on best practices for enterprise cloud migration. Released weekly from the end of April to the end of May 2021, each article will cover a new phase of a business’s transition to the cloud, what to be on the lookout for, and how to ensure the journey is a success.

Be sure to subscribe to our blog to be notified when new content goes live!

Assessing your ready state

Last time, we talked about detailed planning that forms the foundation of your cloud migration effort. Now it’s time to really understand what your team can do, and how you can help them get to a place where they’re empowered to enact a big organizational shift.

Making a big change like this is a huge undertaking when you work in a large organization. There’s little guidance out there for executive teams on just how much learning effort is required to turn the ship around. What’s needed is for your Learning & Development team to present a clear direction on the training goals; the result will go a long way toward ensuring that things run smoothly.

Why is buy-in from L&D important? Some of the biggest mistakes happen in the early stages of cloud adoption, when enthusiasm for new tasks and appetite for experimentation are high but knowledge of best practices is low or non-existent. Your staff doesn’t necessarily need to pass a cloud certification exam before they can spin up instances, but it’s easy for people to make expensive mistakes in the early stages of adoption if they don’t know or understand the proper operating procedures. L&D needs to be the first line of defense in these early stages.

So our first goal needs to be understanding where your team stands with regard to technical skills, followed by a plan to get them to a ready state to accomplish the technical goals that you laid out in the planning phase.

Readiness is both a state and a process

You’ll start by accurately pinpointing your team’s baseline skills. Once they are on track, you’ll then want to upskill them again to make sure they stay up to date with constantly changing cloud technology.

This brings us to a main point. As you assess your team, grow their skills, and make sure they’re on track, you’ll realize that this is actually a process of continuous development. Just like agile software teams work in sprints and constantly deploy code updates to stay current, the readiness stage of your cloud migration is continuous. It will be a culture change for your organization, one with positive impacts, because your teams will be ready and will be motivated by individual and group growth and success.

Here are some highlights of what a readiness program should entail:

Pre-assessment

To get started, you’ll need to determine your team’s current skills and capabilities. This means creating a breakdown for each individual member that can be updated as they progress through their learnings.

Skill Assessment

You don’t want to be in the dark about your team’s abilities, so you’ll need a full view in order to build on each member’s development. This helps you predictably upskill talent so you know exactly when they’re ready to tackle that new project. Useful metrics will include strengths, weaknesses, and areas of opportunity.

When you think about it, the ROI of your tech stack is only as strong as the team members operating it. You’ll need to use the insights from multiple skill assessments to paint a picture of broader skill coverage, ideally at the individual, team, and organization level.

Dashboards

Yes, dashboards are all the rage and will continue to be, for good reason. A well-designed dashboard helps you cleanly separate signal from noise.

Ideally your dashboard to monitor organization readiness will contain the following items:

  • ability to assign assessments
  • current skill levels
  • an organization of teams that effectively mirrors your internal organization

These seem like simple things, but they often get overlooked. It’s key to focus on these tenets because, with many programs rolling out in tandem, it’s easy to get overwhelmed with too much information.

Cloud-first: four key goals to keep in mind

It bears repeating that there will be a lot going on in your organization. This readiness and learning stage of a cloud migration is going to set you up for success as you adopt a cloud-first mindset. 

The new mentality at your organization will consist of the following key understandings, which will eventually become part of the healthy baseline culture.

  1. Being aware of cloud tools and services
  2. Being able to use cloud services effectively, economically, and safely
  3. Understanding how to apply cloud tools to solve customer problems
  4. Being able to use your cloud services together to create new products and solutions

Conclusion

This is exciting stuff — changes, new technology, opportunity for growth. In part 4 of our series, we’ll delve into maintaining that cloud-first mentality, especially when the going gets tough and real-life challenges invariably pop up to get in your way.

Ready, Set…Cloud!

If you’d like a preview of what our blog series will cover in a more in-depth fashion, this guide is a great start. We share some best practices and insights gained from our experience helping many organizations on their journey to cloud success. Use it as a helpful reminder to stay on track.


Cloud Migration Series (Step 2 of 5): Start Planning
This is part 2 of a 5-part series on best practices for enterprise cloud migration. Released weekly from the end of April to the end of May 2021, each article will cover a new phase of a business’s transition to the cloud, what to be on the lookout for, and how to ensure the journey is a success.

Be sure to subscribe to our blog to be notified when new content goes live!

Start planning your cloud migration

You’ve defined your cloud strategy. You understand why you want to take a journey to the cloud. Now that the high-level strategy is done, let’s focus on the nitty-gritty details. It’s time to make a plan for making the big change. Don’t get discouraged; it’s going to be a lot of work, and that’s why we’ve created this series to help you.

Budget for the journey

You’re going to have to sit down and have a realistic conversation with your technical leads and executives about how much a cloud migration is going to cost. Let’s be clear, a migration from legacy systems is not like flipping a switch, nor is it a one-time affair. Rather, like most change that lasts, it’s a process with milestones. 

As you go through the continuum of change, you’ll eventually leave some or all of your legacy systems behind (depending on your views on hybrid cloud). But as that is happening, you’ll need to maintain old systems, scale new systems, and make sure you have the right employee talent to keep things progressing forward. 

A few key facets to consider:

  • Current budget and fiscal year considerations
  • Ideal timeframe for transition
  • Product roadmap, short- and long-term

Check your infrastructure

Before you dive in and take the whole organization with you, let’s take a look at what’s being developed in the various groups in your business. Some of these might be more translatable to the cloud, such as lightweight mobile apps. But remember, there are so many cool technologies and buzzwords out there (Kubernetes, agile, data lakes, real-time everything) that you have to think hard about whether there’s really a business case for jumping in.

For example, maybe you run a monolithic app that’s been your bread and butter for years. Maybe it’s worth hosting the app in the cloud but not changing to microservices. Instead, since you’ve taken a good look at your product roadmap, you might spin off a new product that can then benefit from some quick cloud-based solutions, such as turnkey managed services like AI libraries or data analysis.

Initial organizational alignment

The last thing you want is for people to leave meetings about your migration and have nothing happen. Fast forward six months: progress has been scattershot, morale is low, and there’s zero momentum. But let’s not focus on the negative… what are actionable steps you can take to guarantee forward movement?

In part one we talked about organizational buy-in. A good way to maintain internal accountability is to create a multi-disciplinary “Tiger Team” — a group of individuals who can meet to maintain focus and ensure that separate groups don’t become too siloed.

Modify this to your own needs and don’t bog people down with fluff, but do hold them accountable, whether through unobtrusive meetings or reports to leadership. Remember, this effort has to be supported from above: a positive culture that starts at the top can be contagious.

Assess your team

How are you going to get your team from point A to point B? Will they be ready not just to take steps toward a migration, but to create products in a new way?

Your product, engineering, and IT teams are experienced in designing, creating, testing, and deploying monolithic applications. They probably have some experience with cloud technology already, and it’s a safe bet that they have a deep curiosity to learn more: that’s part of the creator’s mindset, and it’s why they’re in this field.

Now you need to establish a baseline for where their skills are, how that aligns with your strategy and planning, and how to raise their skills to meet your strategy.

Develop a skills readiness plan

If you want to create an effective plan for your employees’ technical growth, you need a good way to assess their skills and develop them at scale. We’ll review this more in part three of this series, but here’s an overview of how it’s done.

Start by accurately pinpointing baseline skills

  • Test competence across multiple cloud platforms and technologies and track skill improvement
  • Test practical, hands-on tech skills
  • Streamline the assessment process with automated reminders
  • Understand where your team stands and how fast they’re growing

Quickly increase technical capabilities

  • Drive skill growth with hands-on cloud training programs built to master AWS, Azure, Google Cloud, and DevOps
  • Build and assign training plans with 10,000+ hours of up-to-date cloud training
  • Keep your team accountable with built-in reminders and weekly reports
  • Track progress and completion on a real-time dashboard

Confidently know when your team is ready

  • Measure practical expertise through skill reports based on hands-on assessments
  • Challenge your team with lab scenarios using actual AWS, Azure, and Google Cloud accounts
  • Establish a data-driven approach to learning and skills management
  • Understand your team’s strengths and identify skill gaps

Cloud adoption plan

Migrating to the cloud isn’t just about technology. The right mindset, reinforced by repeated, tactful reminders to stay on course, makes all the difference.

You’ll find that one of the main challenges with transformation projects is keeping a clear sense of direction. Often there isn’t a dedicated project resource to run the transformation, which makes it easy for people on the ground to lose focus and end up working in silos. Confusion sets in and the wheels can quickly come off: everyone loses interest and momentum.

What can make a significant difference is when learning and development have a clear program structure to drive the behavioral outcomes that leadership wants to see. This can be the backbone to build your cloud adoption plan on.

On top of this framework, you can start to build the basics of how to use cloud services both securely and efficiently. Then you will layer the most important factor on top: your people. Your people will use the framework to both increase their skills and collaborate with new (and sometimes scary) tools and technologies.

Conclusion

Now you have a plan for how to get your arms around this whole digital transformation. Next, we’ll dig deeper into your people and how to assess readiness for your team. You’ll learn about what to look for and how to know you’re all set for your cloud migration and for whatever technical projects you choose in the future.

Ready, Set…Cloud!

If you’d like a preview of what our blog series will cover in a more in-depth fashion, this guide is a great start. We share some best practices and insights gained from our experience helping many organizations on their journey to cloud success. Use it as a helpful reminder to stay on track.


The post Cloud Migration Series (Step 2 of 5): Start Planning appeared first on Cloud Academy.

]]>
Cloud Migration Series (Step 1 of 5): Define Your Strategy https://cloudacademy.com/blog/cloud-migration-1-define-your-strategy/ https://cloudacademy.com/blog/cloud-migration-1-define-your-strategy/#respond Thu, 29 Apr 2021 04:00:18 +0000 https://cloudacademy.com/?p=46267 If you’ve already locked in your strategy, have a look at what you should do next. Cloud Migration Series (Step 2 of 5): Start Planning Cloud Migration Series (Step 3 of 5): Assess Readiness Cloud Migration Series (Step 4 of 5): Adopt a Cloud-First Mindset Cloud Migration Series (Step 5...

The post Cloud Migration Series (Step 1 of 5): Define Your Strategy appeared first on Cloud Academy.

]]>
This is part 1 of a 5-part series on best practices for enterprise cloud migration. Released weekly from the end of April to the end of May 2021, each article will cover a new phase of a business’s transition to the cloud, what to be on the lookout for, and how to ensure the journey is a success.

If you’ve already locked in your strategy, have a look at what you should do next.

Be sure to subscribe to our blog to be notified when new content goes live!

Getting Started

Cloud migration is the process of moving IT components (data, applications, systems) from on-premises infrastructure to the cloud, or from one cloud platform to another. Modern enterprises have embraced cloud computing for its superior speed and agility, cost savings, and always up-to-date, automated software releases. In fact, according to Statista, about 50% of all corporate data is now stored in the cloud.

The evolution of technologies such as big data, machine learning, artificial intelligence, and the Internet of Things (IoT) has played a major role in enterprises making the shift to the cloud. At the same time, external factors like the COVID-19 pandemic, which forced companies to operate and onboard new employees remotely, have further accelerated cloud adoption. Gartner predicts that worldwide public cloud spending will grow by 18.4% this year.

For those just getting started, you may be further along than you think. There’s a good chance that you already use the cloud in day-to-day operations without even realizing it. We’d bet that your email provider, file storage, and CRM are cloud-hosted applications — to name a few. But the cloud offers a lot more than that. Think about the possibilities of auto-scaling to meet any customer demand across the globe, or leveraging containers and microservices to modularize your applications, keeping your products running with high availability. These are just some of the benefits you’ll be able to take advantage of once you’re well on your journey.

Like any business transformation, getting started with cloud migration is often the most difficult and daunting challenge. There’s a lot to consider, which is why defining your strategy is a critical, yet often overlooked or underdeveloped, first step. Let’s dive in.

Identify Cloud Migration Goals

Most people are familiar with the generic benefits of cloud computing, but envisioning (and executing upon) them for your own organization is an entirely different story. Every business’s IT infrastructure, processes, and regulations are unique. And cloud value is perceived differently depending on industry and operating model.   

Before any steps are taken toward migrating to the cloud, tech teams must first understand how such a move fits into the business’s overall strategy. Are there existing problems that could be fixed through cloud adoption? Would moving certain processes to the cloud save costs? How can the cloud further enable innovation?

Defining concrete goals based on KPIs that are relevant to business objectives will lay the foundation for all future initiatives. After all, if you can’t measure success, what’s the point of investing in the first place? Here are some topline ideas to mull over when thinking about your goals:

  • Reinforcing business continuity plans
  • Reducing costs and avoiding vendor lock-in
  • Improving execution on your product roadmap
  • Delivering better customer support or user experience
  • Increasing revenues as a result of improved customer retention

Gaining Organizational Buy-in

It was intentional when we said “business transformation” instead of “IT transformation” earlier. In many cases, leadership leaves these kinds of decisions to technology teams. But when it comes to a cloud migration effort, an all-hands-on-deck approach is required — and that starts at the top.

Let’s think for a moment about the benefits associated with moving to the cloud. From improving flexibility to saving costs and streamlining the customer experience, it’s logical to connect your business goals with your digital transformation activities. Leverage the expertise of enterprise architects to analyze applications, identify potential quick wins, and develop a best-case proposal for migrating on a larger scale. 

By performing an analysis of the application portfolio and communicating anticipated benefits to the business, technology leaders can structure a measured approach to cloud adoption that is more likely to get backing from executives. From there, you can communicate next steps and value to affected parts of the organization. Strategy = defined!

Part 2 of this blog series will discuss what you must do during the planning phase.

Ready, Set…Cloud!

If you’d like a preview of what our blog series will cover in a more in-depth fashion, this guide is a great start. We share some best practices and insights gained from our experience helping many organizations on their journey to cloud success. Use it as a helpful reminder to stay on track.


The post Cloud Migration Series (Step 1 of 5): Define Your Strategy appeared first on Cloud Academy.

]]>
AWS Machine Learning Labs and Certification Preparation https://cloudacademy.com/blog/aws-machine-learning-labs-and-certification-preparation/ https://cloudacademy.com/blog/aws-machine-learning-labs-and-certification-preparation/#respond Wed, 14 Apr 2021 12:55:53 +0000 https://cloudacademy.com/?p=46179 Cloud technology democratizes so many things, not the least of which is the opportunity to experiment and learn. Take Machine Learning (ML), for instance. There are so many ways to learn about it and experiment with it. It used to be that you had to create your own algorithms from...

The post AWS Machine Learning Labs and Certification Preparation appeared first on Cloud Academy.

]]>
Are you trying to dig deep into AWS Machine Learning but don’t know where to start? Let’s talk about how you can do that with Cloud Academy.

Cloud technology democratizes so many things, not the least of which is the opportunity to experiment and learn. Take Machine Learning (ML), for instance. There are so many ways to learn about it and experiment with it. It used to be that you had to create your own algorithms from scratch or look for open source repositories to fork and repurpose.

For a while now, we’ve also had managed services that allow you to pick and choose models and data sets. This allows you to get started faster. That’s a great advantage, but like everything, it comes at a cost — a per-hour or sometimes per-minute cost! If you want to learn quickly, it can be overwhelming to…

  • Make a goal
  • Choose a service
  • Get fluent enough in a service to achieve that goal
  • Stay on top of your usage so your costs don’t spiral out of control (see the cost-tracking sketch below)
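
That last point is worth automating early. As a rough sketch (assuming Cost Explorer is enabled on your AWS account and you’re watching a managed service such as SageMaker), a few lines of boto3 can pull your recent daily spend:

```python
import boto3

# Rough sketch: pull a week's daily SageMaker spend.
# Assumes Cost Explorer is enabled and credentials are configured.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-04-01", "End": "2021-04-08"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon SageMaker"]}},
)
for day in response["ResultsByTime"]:
    print(day["TimePeriod"]["Start"], day["Total"]["UnblendedCost"]["Amount"])
```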

How can you get straightforward Machine Learning guidance?

It’s not always easy to find that on the web — that’s why we’ve created tons of resources on AWS Machine Learning, including our learning path: AWS Machine Learning – Specialty Certification Preparation. This is what we do!

Certification learning paths make it easier and more efficient for you and your team to learn and to prove you have some of the most in-demand skills in today’s marketplace. As Gartner states, “By 2022, public cloud services will be essential for 90% of data and analytics innovation.”

Our structured AWS Machine Learning – Specialty cert prep will help you:

  • Get focused: You get a path to your milestones
  • Get visibility: You get tracking so you can clearly see your start and end points
  • Get awarded: You or your teams earn tangible accomplishments — AWS Certification

Want some of those tangible examples?

Our newest AWS Machine Learning Hands-on Labs are developed by experts with research and real-world field experience. These new labs are focused on the basic machine learning concepts featured in the AWS Machine Learning Certification Prep.

Handling Missing Data

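As a taste of what this lab covers, here’s a minimal sketch on toy data (not the lab’s actual dataset) of two common strategies: dropping incomplete rows versus imputing with scikit-learn.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy data standing in for the lab's real dataset
df = pd.DataFrame({"age": [34, np.nan, 52, 46],
                   "income": [72000, 58000, np.nan, 61000]})

# Strategy 1: drop any row with a missing value
dropped = df.dropna()

# Strategy 2: fill gaps with the column median instead
imputer = SimpleImputer(strategy="median")
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed)
```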

Evaluating Model Predictions for Regression Models

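A quick sketch of the kinds of metrics this lab works with (mean absolute error, root mean squared error, and R²), computed with scikit-learn on made-up predictions:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = [3.1, 4.0, 5.2, 6.8]  # observed values
y_pred = [2.9, 4.3, 5.0, 7.1]  # model predictions

mse = mean_squared_error(y_true, y_pred)
print("MAE: ", mean_absolute_error(y_true, y_pred))
print("RMSE:", mse ** 0.5)
print("R2:  ", r2_score(y_true, y_pred))
```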

Evaluating Binary Classification Models

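In the same spirit, a minimal sketch of common binary classification metrics, again on toy data rather than the lab’s own:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]                # hard class predictions
y_scores = [0.2, 0.6, 0.9, 0.7, 0.4, 0.1]  # predicted probabilities

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("ROC AUC:  ", roc_auc_score(y_true, y_scores))
```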

Testing Your Models in the Real World

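And a sketch of the underlying idea in this lab, as we understand it: establish an offline baseline on held-out data, then score freshly collected production data the same way and watch for a gap.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for historical records
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
offline_score = accuracy_score(y_holdout, model.predict(X_holdout))
print("Offline holdout accuracy:", offline_score)

# In production, score freshly collected, labeled data the same way;
# a large gap versus offline_score is a warning sign of drift.
```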

Want some personal guidance for the Machine Learning certification?

Our AWS Certification Specialist, Stephen Cole, can’t literally sit in the room with you (for now), but he shares his insights into the AWS Machine Learning – Specialty exam. Check out Stephen’s observations below, taken from his preview course.


New content is always being created

Our teams are constantly creating new courses, labs, and exams for you. See what’s just released and coming soon on our Content Roadmap.

The post AWS Machine Learning Labs and Certification Preparation appeared first on Cloud Academy.

]]>
Azure DP-100 Certification: Accelerate your Data Science Career https://cloudacademy.com/blog/azure-dp-100-welcome-to-the-machine/ https://cloudacademy.com/blog/azure-dp-100-welcome-to-the-machine/#respond Fri, 04 Sep 2020 12:39:13 +0000 https://cloudacademy.com/?p=43976 What is DP-100 about? The big picture Let’s say you’re a data scientist — or an aspiring one. You need some tools in your toolbelt. There are all sorts of open-source frameworks you learn such as TensorFlow, PyTorch, and scikit-learn. Now with the Azure DP-100 certification, you can take fluency...

The post Azure DP-100 Certification: Accelerate your Data Science Career appeared first on Cloud Academy.

]]>
What is DP-100 about?

The big picture

Let’s say you’re a data scientist — or an aspiring one. You need some tools in your toolbelt. There are all sorts of open-source frameworks to learn, such as TensorFlow, PyTorch, and scikit-learn. Now, with the Azure DP-100 certification, you can build on your fluency with those frameworks and demonstrate that you also know the Azure Machine Learning service.

Illustration from Cloud Academy DP-100 intro course

Azure Machine Learning is Azure’s in-house ML solution, designed as a comprehensive platform for getting your machine learning models deployed as fast as possible. It integrates everything you need to release models in Azure environments and handles the particular idiosyncrasies of Microsoft’s cloud.

The Cloud Academy DP-100 Learning Path

What you’ll learn about in the DP-100 Learning Path

Cloud Academy’s Learning Paths are carefully designed learning experiences that guide you through everything you need to achieve a certain goal. In this DP-100 Learning Path, you’ll learn:

  • How to create an Azure Machine Learning workspace
  • How to train a machine learning model using the drag-and-drop interface
  • How to deploy a trained model to make predictions based on new data (see the SDK sketch below)
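
To give you a feel for the SDK side (the drag-and-drop interface is something you’ll see in the course videos), here’s a minimal sketch using the v1 azureml-core Python SDK. Every name, ID, and file here is a placeholder, and score.py and model.pkl are assumed to already exist:

```python
from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Create a workspace (every name and ID here is a placeholder)
ws = Workspace.create(name="my-workspace",
                      subscription_id="<subscription-id>",
                      resource_group="my-resource-group",
                      create_resource_group=True,
                      location="eastus")

# Register a locally trained model file with the workspace
model = Model.register(workspace=ws, model_name="demo-model",
                       model_path="model.pkl")

# Deploy it behind a REST endpoint on Azure Container Instances
inference_config = InferenceConfig(entry_script="score.py")
deploy_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, "demo-service", [model],
                       inference_config, deploy_config)
service.wait_for_deployment(show_output=True)
```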

How does this fit in a career path?

Let’s put this particular Azure certification in context with the rest of the Azure cert family. The DP-100 is one of the newer certs, and it may not be as widely known as the big ones such as the Azure Administrator (AZ-104), Azure Developer (AZ-204), or Azure Solutions Architect (AZ-303 & AZ-304).

While it’s not as well known, covering a niche topic gives you the opportunity to specialize in a unique way — to showcase the marketable skills that the DP-100 cert attests to. Don’t forget: Azure may be #2 in cloud market share, but it’s growing each year, with 18% of the market in Q1 2020.

Why did Cloud Academy make the DP-100 Learning Path?

That’s easy to answer: Our users requested this cert. A lot.

And while a single certification is not the end goal of your career, the DP-100 is a powerful tool in your arsenal to show you can step outside the AWS bubble and help organizations leverage machine learning technology. Whether you’re an employee or a consultant, it’s a credential that can set you apart.

What makes the DP-100 Learning Path special?

Focus, focus, focus. We give you all the vital info you need.

This is a hard certification, but we guide you to mastery of deploying a machine learning model both via the drag-and-drop interface and via the Software Development Kit (SDK).

The learning path courses and labs cover three main sections:

  1. The drag-and-drop interface
    Get info on training and deploying a model, setting up data stores and data sets, and adding custom code.
  2. The machine learning SDK
    Learn about running experiments, creating training sets, and deploying models all with the power and flexibility (and challenge) of using the machine learning SDK.
  3. AutoML and HyperDrive
    Learn how to use Azure’s Automated Machine Learning service to discover the best model for a dataset, and how to use HyperDrive to automate hyperparameter tuning (both are sketched below).
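
For a rough feel of sections 2 and 3, here’s a hedged sketch using the v1 SDK. It assumes a workspace ws, a registered tabular dataset train_ds with a label column, a training script train.py, and a compute cluster named cpu-cluster; the learning path covers the real details:

```python
from azureml.core import Experiment, ScriptRunConfig
from azureml.train.automl import AutoMLConfig
from azureml.train.hyperdrive import (HyperDriveConfig, PrimaryMetricGoal,
                                      RandomParameterSampling, choice, uniform)

# AutoML: let Azure search model types and pipelines for you
automl_config = AutoMLConfig(task="classification",
                             training_data=train_ds,  # assumed tabular dataset
                             label_column_name="label",
                             primary_metric="AUC_weighted",
                             experiment_timeout_hours=0.5)
automl_run = Experiment(ws, "automl-demo").submit(automl_config)

# HyperDrive: tune the hyperparameters of your own training script
sampling = RandomParameterSampling({
    "--learning-rate": uniform(0.001, 0.1),
    "--batch-size": choice(16, 32, 64),
})
src = ScriptRunConfig(source_directory=".", script="train.py",
                      compute_target="cpu-cluster")
hd_config = HyperDriveConfig(run_config=src,
                             hyperparameter_sampling=sampling,
                             primary_metric_name="accuracy",
                             primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                             max_total_runs=20)
hd_run = Experiment(ws, "hyperdrive-demo").submit(hd_config)
```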

Our vision for Learning Paths

We’re passionate about and dedicated to giving you the most complete and up-to-date learning experiences. This DP-100 certification learning path is not set in stone; it’s a living, breathing entity that our experts constantly update and extend. And if you want to keep up with changes in the Azure space (and cloud in general), make staying current one of your main objectives.

Our course author maintains an always up-to-date set of Recommended Reading — supplementary information that is important to consider as you approach your test date.


We want you to succeed, and with additions like these, we provide everything you need to prepare for and pass the DP-100 certification exam.

The post Azure DP-100 Certification: Accelerate your Data Science Career appeared first on Cloud Academy.

]]>