AWS re:Invent | Cloud Academy Blog https://cloudacademy.com/blog/category/aws-reinvent/

New AWS re:Invent Announcements: Dr. Werner Vogels Keynote
https://cloudacademy.com/blog/aws-reinvent-2023-dr-werner-vogels-keynote/ (Fri, 01 Dec 2023)

AWS re:Invent 2023 is nearing its end. This year's keynote by Dr. Werner Vogels, as usual, did not disappoint; you'll see why in a minute. But a quick note before we get into it: if you are looking for lots of exciting new announcements, this isn't the keynote to watch.

After a now-traditional "The Matrix" introduction and overall theme, Werner went into the topic of cost management, and he went deep! I highly recommend this keynote to those old-school IT professionals with software development or data center management experience. You're in for a treat!

Alright, let’s get into the details:

At this point I didn't know the entire presentation was going to center around cost management in the cloud, but I was intrigued by the book he kept referring to, "The Frugal Architect": a book about designing applications that use resources efficiently to save computing power, memory, and, in turn, operational expenses. A quick Amazon search revealed that such a book does not exist. More on that later.

Once it was clear that his entire presentation was going to revolve around this topic, it all started falling into place. He hit specific points and then expanded on them. Here's a taste.

Align cost to business

I really loved this point. In the AWS world, we can get super excited about features: high availability, auto scaling, and serverless.

But we should never forget that if our company's profit depends on low-cost computing, then perhaps we shouldn't be underutilizing a super-expensive 4xlarge EC2 instance if we could be doing the same job with a group of smaller spot instances.

This may not be evident at first, but as the business grows you really don’t want surprises in terms of expenses that directly affect the company’s revenue.
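To make that concrete, here's the kind of back-of-the-envelope math this implies. The hourly rates below are made-up placeholders, not real AWS pricing:

```python
# Hypothetical cost comparison: one underutilized on-demand 4xlarge
# instance vs. a fleet of smaller spot instances doing the same work.
# The hourly rates are placeholders, not actual AWS prices.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, instance_count: int = 1) -> float:
    """Monthly cost for a number of instances at a given hourly rate."""
    return hourly_rate * instance_count * HOURS_PER_MONTH

big_box = monthly_cost(hourly_rate=0.80)                       # one big instance
spot_fleet = monthly_cost(hourly_rate=0.06, instance_count=4)  # four spot nodes

print(f"on-demand: ${big_box:.2f}/mo, spot fleet: ${spot_fleet:.2f}/mo, "
      f"savings: ${big_box - spot_fleet:.2f}/mo")
```

Spot capacity can be reclaimed at any time, so this only works for interruption-tolerant workloads, but the math is the point: the bill should track the work, not the biggest box you can buy.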

This is something that I already do, due to my software development background: keep costs in mind, and by 'costs' I mean everything: CPU cycles, storage, number of servers, and so on.

I agree with Werner that Amazon Web Services is an amazing platform for all your computing needs; just don't let that monthly bill run away from you by accepting defaults or wasting resources.

Observability

One of his points was that an application that isn't tracked and measured will incur hidden or unexpected costs. This point was a nice segue to introduce CloudWatch Application Signals, a new feature to track application-specific cost and usage.

Languages

At one point, he was very specific about programming languages and their overall footprint and impact on the speed of our code. Faster, more efficient languages lead to code that gets the job done faster. He went as far as saying we should be coding in Rust, due to its efficiency and speed. I could argue against this:

Granted, Python, Java, and .NET languages are quite heavy due to their underlying support platforms, making them a poor fit for short, transactional programs. But he failed to account for development costs, long-term maintenance, and time to market. Finding Python and Java developers is quite simple, as these are popular languages all over the world. Finding Rust developers? Not so much.

Of course, if we shift our focus back to his point: Operational cost.

A program in Rust, C, or C++ that can run in 100 milliseconds will always outperform the same program written in Python, Java, or C#, simply because of the long startup time of the managed runtime itself. So, he is 100% correct in terms of cost savings and sustainability.
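Under per-millisecond billing, that runtime difference lands directly on the invoice. Here's a quick sketch with a hypothetical Lambda-style rate; the numbers are placeholders, not AWS's actual price list:

```python
# Why runtime speed translates directly into operational cost under
# per-millisecond billing (Lambda-style). The rate is a placeholder,
# not actual AWS pricing.

PRICE_PER_GB_MS = 0.0000000167  # hypothetical $/GB-ms

def invocation_cost(duration_ms: int, memory_gb: float) -> float:
    """Cost of a single invocation billed per GB-millisecond."""
    return duration_ms * memory_gb * PRICE_PER_GB_MS

monthly_invocations = 100_000_000

# The same logic finishing in 100 ms on a lean compiled runtime vs.
# 1,200 ms on a heavier managed runtime (including startup overhead).
fast_bill = invocation_cost(100, 0.5) * monthly_invocations
heavy_bill = invocation_cost(1200, 0.5) * monthly_invocations

print(f"fast runtime: ${fast_bill:,.2f}/mo, heavy runtime: ${heavy_bill:,.2f}/mo")
```

At a hundred million invocations a month, a 12x difference in duration is a 12x difference in cost, which is exactly Werner's point.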

He also touched on the phrase "but, we've always done things this way…", meaning that we shouldn't be afraid of adopting a new programming language or technology to get the job done in a much more efficient and sustainable way. While I agree with this, not all businesses can afford to turn their senior Python developers into junior Rust developers while expecting the same level of output from them, so your mileage may vary!

Gen AI

When we got to this part of the conversation, I thought "Oh boy, here we go!" and expected it to veer wildly into language models, image generation, Amazon Q, and so on. But no! It was the complete opposite of what I had in mind.

Instead, he showed us use-cases of traditional AI (Machine Learning, SageMaker, Vision) to solve real-world problems, such as interpreting radiology scans, correctly identifying grains of rice for germination and analyzing image data to find and help victims of child abuse.  

By the way, about that software that checks those X-ray images: Dr. Vogels worked in the health industry before making the move to technology, so he wrote the initial code himself in Python before it was delegated. That code is now open source and much more feature-rich.

Even in this part of the conversation he stayed traditional, as opposed to jumping on the bandwagon of generative AI. I love it!

Although, not gonna lie: I am a huge advocate of the Cloud Development Kit, and he happened to mention that there are new constructs available, specific to GenAI, to help us quickly deploy these solutions for our own custom needs.

AI predicts, humans decide

He also emphasized that “AI Predicts, but ultimately humans make the decisions”, implying that machines aren’t going to take our jobs, replace our doctors or grow food for us, but they can certainly assist us to help keep up with an ever-growing population. 

As part of his closing argument, he recommended reading his short ebook, The Frugal Architect, to help us remember the main points of his talk.

To wrap this up: it was great! It was certainly geared at old-timers from the very start. In fact, in the first minute he looked at a screen and said "is that a Perl script?" I couldn't help but laugh out loud at that one.

Even after the close it was still hilarious: "Hey Werner, can I scan my container builds for vulnerabilities in my CI/CD pipeline?" "You can now!" — a nice way to sneak in one more new feature, which I will certainly look into right away, since I am a DevOps guy.

Now, go build something!

Useful resources from this presentation

https://thefrugalarchitect.com/

In the end he casually dropped this ebook that he wrote, which summarizes the same bullet points he hit during the presentation. The advice is great whether you're in the cloud or not, so even if you aren't yet, you should check it out.

By the way, it is a really short read, so, I highly recommend you take a few minutes of your time and go check it out right now!

CDK

Gen AI constructs for the CDK

This is the new set of constructs I mentioned. If you need to deploy custom generative AI solutions in a hurry, you should seriously take a look: https://github.com/awslabs/generative-ai-cdk-constructs

The post New AWS re:Invent Announcements: Dr. Werner Vogels Keynote appeared first on Cloud Academy.

New AWS re:Invent Announcements: AWS Partner Keynote with Dr. Ruba Borno
https://cloudacademy.com/blog/aws-reinvent-2023-dr-ruba-borno-keynote/ (Fri, 01 Dec 2023)

We're more than halfway through AWS re:Invent 2023, and a remarkable number of new services and features have launched every day this week. If you haven't been keeping up with the keynotes at re:Invent and want the highlights of each, I'd encourage you to read our team's blog posts covering each keynote of the week.

To wrap up Wednesday's announcements, we had a keynote delivered by Dr. Ruba Borno, Vice President of AWS Worldwide Channels and Alliances. This keynote was primarily focused on the relationship between AWS and AWS Partners and how they can deliver value to their shared customers. Throughout the keynote, we not only had new announcements, but tons of great use cases and stories from partnerships between AWS, AWS Partners, and customers.

The theme of the keynote was to help customers make the impossible seem possible by working with the right AWS Partner.

Let’s get into some of the new launches in the AWS Partner Space.

AWS Customer Engagement Incentive

There are many new opportunities and prospects that have yet to move to AWS, and this incentive is designed to better address them. The AWS Customer Engagement Incentive will help AWS Partners engage companies that are new to AWS or in the early stages of adoption.

Not only does it offset costs across every part of the sales cycle, it provides a simple global framework that Partners can use to focus on the initial customer workload, better drive the growth and spend of new AWS customers, and scale over time.

It's worth mentioning that this incentive is not a new re:Invent announcement; it launched earlier this year, which is new enough to still get a spot in this post. Since it was introduced, Partners have launched 87% of eligible opportunities in the pipeline.

The Generative AI Center of Excellence for AWS Partners

The pace of innovation in the generative AI space has been dizzying. It can be difficult to keep up with the rapid development of these tools. On top of that, you have a huge demand from customers who want to transform their businesses by implementing generative AI into their own technology stacks. 

AWS hopes The Generative AI Center of Excellence for AWS Partners will make it easier for Partners to keep up with the generative AI space to better serve customer needs. It provides both technical and non-technical enablement content and training, example customer use cases, forums for knowledge sharing, and best practices. 

With the center of excellence, AWS Partners can: 

  • Familiarize themselves with a wide range of generative AI offerings
  • Connect with leaders across the Amazon Partner Network on specialized generative AI applications
  • Build applications that go beyond optimization and understand considerations like fairness, privacy, and safety
  • Leverage data-driven insights and tools to accelerate customers’ generative AI journeys

Accenture and AWS Expanded Partnership

Accenture has consistently been an early adopter of the latest AWS technology. For example, when Amazon CodeWhisperer launched, Accenture adopted the technology and saw a 30% performance boost in their development efforts.

Accenture has announced that they will continue integrating AWS AI services into Accenture’s automation platforms. To scale this, Accenture has committed to training 50,000 development engineers on AWS AI services, such as Amazon Q and Amazon CodeWhisperer. 

Accenture is just one of the many companies that will contribute content to the Generative AI Center of Excellence for AWS Partners, alongside other leaders in the space such as Anthropic, NVIDIA, and more.

Accenture also has its own Center for Advanced AI, which offers accelerators, best practices, and hands-on training to better help their clients maximize the value of Amazon Q and other AWS AI services.

C3 Generative AI: AWS Marketplace Edition

C3 AI announced the availability of C3 Generative AI on the AWS Marketplace. This is a no-code, self-service environment that provides a fully functional generative AI application. Users can get better insight into their data and start asking questions about it within minutes of setup.

In fact, not only is it functional, it also takes care of some of those pesky problems that large language models often experience: C3 AI claims no hallucinations, no cybersecurity risks, and no IP liability problems.

If you're interested in potentially using this tool, you can join the private preview.

ServiceNow and AWS Expanded Partnership

ServiceNow and AWS have been brainstorming about how to bring AWS’ reach, data, and scalability and combine it with ServiceNow’s intelligent platform for digital transformation. 

In January 2024, ServiceNow and its full suite of solutions will be available as a Software-as-a-Service (SaaS) offering in the AWS Marketplace. 

Additionally, AWS and ServiceNow are teaming up to launch industry-specific AI tooling to list in the AWS Marketplace. For example, the two companies are currently working on a supply chain solution by integrating AWS Supply Chain with ServiceNow to better streamline supply chain management.

New AWS Partner Competencies

The AWS Competency Partner Program is a way for customers to validate that AWS Partners have the appropriate skills and technical expertise to help customers in a specific area. AWS has released three new areas of AWS Competency specialization, including:

  • The AWS Resilience Competency, which helps customers improve the availability and resilience of their workloads. 
  • The Cyber Insurance Competency, which helps customers find policies from insurers; AWS customers can get a quote for the cyber insurance coverage they need within two business days.
  • Built-in Competency Partner Solutions, which provide an infrastructure-as-code solution to install, configure, and integrate Partner software with foundational AWS services.

AWS Marketplace SaaS Quick Launch

SaaS Quick Launch is a new feature in the AWS Marketplace which makes it easy for customers to quickly configure, deploy, and launch SaaS products on AWS. It does this by using CloudFormation templates that are defined by the software vendor and validated by both the software vendor and AWS to ensure the software adheres to the latest security standards. 

In the AWS Marketplace, you can find the SaaS products that use this feature by looking for the “Quick Launch” tag in the product description. Once you click on Quick Launch for the product of your choice, you’ll be able to easily deploy the software. This feature is generally available, so keep an eye out for the Quick Launch tag in the AWS Marketplace.

AWS Marketplace APIs for Sellers

By using the AWS Marketplace APIs for Sellers, you can now enable AWS Marketplace access through your current applications, building AWS Marketplace product, offer, resale authorization, and agreement workflows directly into your own systems. This leads to greater efficiency for the Partner and can also encourage customers to make Marketplace purchases through your domain. This feature is generally available and ready to use now.

Pricing Adjustments to AWS Marketplace

AWS is lowering pricing for the AWS Marketplace effective January 2024. The new pricing model will be a flat fee structure of 3 percent. In some cases, it may be even lower. 

Here are the updates to the pricing model: 

  • For private offers under $1 million, it’s now a 3% fee. 
  • For deals between $1 million and $10 million, it’s now a 2% fee. 
  • For private offers greater than $10 million, it’s now a 1.5% fee. 
  • For renewal fees for private software and data, it’s now a 1.5% fee.
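If you want to sanity-check a deal against the new tiers, the structure boils down to a few lines. This is my reading of the announcement, not an official calculator, and the exact tier boundaries are my assumption:

```python
# Sketch of the announced AWS Marketplace fee tiers: a flat rate
# selected by private-offer size, with renewals at 1.5%.
# Illustration only; the tier boundary handling is my assumption.

def marketplace_fee(deal_usd: float, renewal: bool = False) -> float:
    """Fee owed on a private offer under the January 2024 pricing."""
    if renewal:
        rate = 0.015
    elif deal_usd < 1_000_000:
        rate = 0.03
    elif deal_usd <= 10_000_000:
        rate = 0.02
    else:
        rate = 0.015
    return deal_usd * rate

print(marketplace_fee(500_000))     # 3% tier
print(marketplace_fee(5_000_000))   # 2% tier
print(marketplace_fee(20_000_000))  # 1.5% tier
```

Note the rate applies to the whole deal (a flat fee structure), not marginally to each bracket.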

AWS and Salesforce Partnership Expansion

AWS and Salesforce have announced an expansion to their partnership so that customers can better benefit from the combined value of both products. 

As part of the announcement, the companies released the following information: 

  • Salesforce will begin offering their products on the AWS Marketplace, including products such as Data Cloud, Service Cloud, Heroku, Tableau, Mulesoft, and many more. This is available now with expanded product support planned for 2024. 
  • AWS and Salesforce are making Data Cloud a SaaS offering. This enables customers to use AWS services to process and analyze data stored in Data Cloud.

AWS Partner Central Enhancements

Recently, a new AWS Partner Central experience was launched. The new enhancements personalize the experience for the AWS Partner and the roles in their business. This includes automated prescriptive guidance with tasks and next best actions customized for the Partner. 

For AWS Partners enrolled in the software path, AWS is also providing an integrated experience between AWS Partner Central and AWS Marketplace. This means that AWS Partners can create AWS Marketplace listings from AWS Partner Central. Linking your AWS Partner Central and AWS Marketplace accounts and users also enables you to see AWS Marketplace analytics in your Partner Analytics Dashboard.

Finally, they've introduced a new co-sell experience through APN Customer Engagements (ACE) that is customized to the AWS Partner and their co-sell needs. This enables Partners to better maintain pipeline hygiene by prioritizing opportunities where AWS sales support is needed, providing insight on net-new engagements, and standardizing the information gathered in the Partner referral and AWS referral experiences.

Further, the Partner ACE pipeline is now integrated with AWS Marketplace Private Offers, providing real-time sales insights and the ability to unlock new growth opportunities more easily.

AWS Partner CRM Connector Now Supports AWS Marketplace

The AWS Partner CRM Connector now supports AWS Marketplace, better enabling Partners to manage and publish Marketplace private offers and resale authorizations. The CRM Connector also supports AWS Partner Central ACE Pipeline Manager capabilities, including an enhanced feature to view a summary of AWS Marketplace private offers.

The AWS Partner CRM Connector is available at no cost on the Salesforce AppExchange.

AWS Sustainability Goals

Finally, we ended the keynote with a talk on sustainability. It’s no secret that AWS is committed to sustainability, with plans to be net zero carbon by 2040, water positive by 2030, and 100% renewable energy-powered by 2025. 

Sustainability goals are not just top of mind for AWS, they’re becoming increasingly important to customers as well. In fact, 1 in 3 customers say that sustainability is a key priority for their business. To meet this customer need, AWS Partners have created over 1,000 sustainability solutions hosted in the AWS Solutions Library and AWS Marketplace. 

Finally, AWS has encouraged AWS Partners to join The Climate Pledge. Over 450 companies have joined, and the number is expected to grow.

That brings us to the end of this keynote. The AWS Partner space has been innovating at a significant pace this year. Hopefully, these improvements make the lives of AWS Partners much easier. Enjoy the rest of the week!

The post New AWS re:Invent Announcements: AWS Partner Keynote with Dr Ruba Borno appeared first on Cloud Academy.

New AWS re:Invent Announcements: Swami Sivasubramanian Keynote
https://cloudacademy.com/blog/aws-reinvent-2023-swami-sivasubramanian-keynote/ (Thu, 30 Nov 2023)

The post New AWS re:Invent Announcements: Swami Sivasubramanian Keynote appeared first on Cloud Academy.

What an incredible week we’ve already had at re:Invent 2023! If you haven’t checked them out already, I encourage you to read our team’s blog posts covering Monday Night Live with Peter DeSantis and Tuesday’s keynote from Adam Selipsky.

Today we heard Dr. Swami Sivasubramanian’s keynote address at re:Invent 2023. Dr. Sivasubramanian is the Vice President of Data and AI at AWS. Now more than ever, with the recent proliferation of generative AI services and offerings, this space is ripe for innovation and new service releases. Let’s see what this year has in store!

Swami began his keynote by outlining how over 200 years of technological innovation and progress in the fields of mathematical computation, new architectures and algorithms, and new programming languages has led us to this current inflection point with generative AI. He challenged everyone to look at the opportunities that generative AI presents in terms of intelligence augmentation. By combining data with generative AI, together in a symbiotic relationship with human beings, we can accelerate new innovations and unleash our creativity.

Each of today’s announcements can be viewed through the lens of one or more of the core elements of this symbiotic relationship between data, generative AI, and humans. To that end, Swami provided a list of the following essentials for building a generative AI application:

  • Access to a variety of foundation models
  • Private environment to leverage your data
  • Easy-to-use tools to build and deploy applications
  • Purpose-built ML infrastructure

In this post, I will be highlighting the main announcements from Swami’s keynote, including:

  • Support for Anthropic’s Claude 2.1 foundation model in Amazon Bedrock
  • Amazon Titan Multimodal Embeddings, Text models, and Image Generator now available in Amazon Bedrock
  • Amazon SageMaker HyperPod
  • Vector engine for Amazon OpenSearch Serverless
  • Vector search for Amazon DocumentDB (with MongoDB compatibility) and Amazon MemoryDB for Redis
  • Amazon Neptune Analytics
  • Amazon OpenSearch Service zero-ETL integration with Amazon S3
  • AWS Clean Rooms ML
  • New AI capabilities in Amazon Redshift
  • Amazon Q generative SQL in Amazon Redshift
  • Amazon Q data integration in AWS Glue
  • Model Evaluation on Amazon Bedrock

Let’s begin by discussing some of the new foundation models now available in Amazon Bedrock!

Anthropic Claude 2.1

Just last week, Anthropic announced the release of its latest model, Claude 2.1. Today, this model is now available within Amazon Bedrock. It offers significant benefits over prior versions of Claude, including:

  • A 200,000 token context window
  • A 2x reduction in the model hallucination rate
  • A 25% reduction in the cost of prompts and completions on Bedrock

These enhancements improve the reliability and trustworthiness of generative AI applications built on Bedrock. Swami also noted how having access to a variety of foundation models (FMs) is vital and that “no one model will rule them all.” To that end, Bedrock offers support for a broad range of FMs, including Meta’s Llama 2 70B, which was also announced today.

Amazon Titan Multimodal Embeddings, Text models, and Image Generator now available in Amazon Bedrock

Swami introduced the concept of vector embeddings, which are numerical representations of text. These embeddings are critical when customizing and enhancing generative AI applications with things like multimodal search, which could involve a text-based query along with uploaded images, video, or audio. To that end, he introduced Amazon Titan Multimodal Embeddings, which can accept text, images, or a combination of both to provide search, recommendation, and personalization capabilities within generative AI applications. He then demonstrated an example application that leverages multimodal search to assist customers in finding the necessary tools and resources to complete a household remodeling project based on a user’s text input and image-based design choices.
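If vector embeddings are new to you, the core idea fits in a few lines: items and queries become vectors, and "similar" means "close." The four-dimensional vectors below are made up for illustration; a model like Titan Multimodal Embeddings would produce much higher-dimensional ones:

```python
import math

# Toy illustration of embedding-based similarity search. The catalog
# items and their 4-dimensional "embeddings" are invented; a real
# embedding model would produce these vectors from text and images.

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

catalog = {
    "mid-century walnut dining table": [0.9, 0.1, 0.3, 0.0],
    "oak bookshelf":                   [0.7, 0.2, 0.6, 0.1],
    "cordless power drill":            [0.0, 0.9, 0.1, 0.8],
}

query = [0.85, 0.15, 0.35, 0.05]  # embedding of a combined text+image query

best = max(catalog, key=lambda item: cosine_similarity(query, catalog[item]))
print(best)
```

The multimodal part is simply that text, images, or both get mapped into the same vector space, so the nearest-neighbor lookup above works regardless of which modality produced the query.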

He also announced the general availability of Amazon Titan Text Lite and Amazon Titan Text Express. Titan Text Lite is useful for performing tasks like summarizing text and copywriting, while Titan Text Express can be used for open-ended text generation and conversational chat. Titan Text Express also supports retrieval-augmented generation, or RAG, which is useful when training your own FMs based on your organization’s data.

He then introduced Titan Image Generator and showed how it can be used to both generate new images from scratch and edit existing images based on natural language prompts. Titan Image Generator also supports the responsible use of AI by embedding an invisible watermark within every image it generates indicating that the image was generated by AI.

Amazon SageMaker HyperPod

Swami then moved on to a discussion about the complexities and challenges organizations face when training their own FMs. These include needing to break up large datasets into chunks that are then spread across nodes within a training cluster. It’s also necessary to implement checkpoints along the way to guard against data loss from a node failure, adding further delays to an already time- and resource-intensive process. SageMaker HyperPod addresses this by splitting your training data and model across resilient nodes, letting you train for months at a time while taking full advantage of your cluster’s compute and network infrastructure, reducing the time required to train models by up to 40%.
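The checkpointing trade-off he described is easy to picture: save state every N steps, and a node failure only costs you the work done since the last save. Here's a toy sketch of just the bookkeeping (no actual ML; HyperPod automates this for you):

```python
# Toy model of training-loop checkpointing: after a node failure the
# run resumes from the last checkpoint, so only the steps since that
# checkpoint are repeated. No real training here, just bookkeeping.

def steps_executed(total_steps, checkpoint_every, fail_at=None):
    """Count how many steps actually run when one node failure occurs."""
    executed = 0
    last_checkpoint = 0
    step = 0
    while step < total_steps:
        step += 1
        executed += 1
        if fail_at is not None and step == fail_at:
            step = fail_at = None, last_checkpoint  # placeholder, replaced below
        if step % checkpoint_every == 0:
            last_checkpoint = step
    return executed

def steps_executed(total_steps, checkpoint_every, fail_at=None):
    """Count how many steps actually run when one node failure occurs."""
    executed = 0
    last_checkpoint = 0
    step = 0
    while step < total_steps:
        step += 1
        executed += 1
        if fail_at is not None and step == fail_at:
            step = last_checkpoint  # roll back to the last saved state
            fail_at = None
            continue
        if step % checkpoint_every == 0:
            last_checkpoint = step
    return executed

# Failure at step 950, checkpoints every 100 steps: only the 50 steps
# since step 900 are repeated.
print(steps_executed(1000, checkpoint_every=100, fail_at=950))     # 1050
# No checkpoints at all: all 950 steps are repeated from scratch.
print(steps_executed(1000, checkpoint_every=10_000, fail_at=950))  # 1950
```

The more frequent the checkpoints, the less work a failure wastes, at the price of the time spent saving; picking that balance for you across a large cluster is part of what HyperPod automates.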

Vector engine for Amazon OpenSearch Serverless

Returning to the subject of vectors, Swami explained the need for a strong data foundation that is comprehensive, integrated, and governed when building generative AI applications. In support of this effort, AWS has developed a set of services for your organization’s data foundation that includes investments in storing vectors and data together in an integrated fashion. This allows you to use familiar tools, avoid additional licensing and management requirements, provide a faster experience to end users, and reduce the need for data movement and synchronization. AWS is investing heavily in enabling vector search across all of its services. The first announcement related to this investment is the general availability of the vector engine for Amazon OpenSearch Serverless, which allows you to store and query embeddings directly alongside your business data. This enables more relevant similarity searches and a 20x improvement in queries per second, all without needing to maintain a separate underlying vector database.

Vector search for Amazon DocumentDB (with MongoDB compatibility) and Amazon MemoryDB for Redis

Vector search capabilities were also announced for Amazon DocumentDB (with MongoDB compatibility) and Amazon MemoryDB for Redis, joining their existing offering of vector search within DynamoDB. These vector search offerings all provide support for both high throughput and high recall, with millisecond response times even at concurrency rates of tens of thousands of queries per second. This level of performance is especially important within applications involving fraud detection or interactive chatbots, where any degree of delay may be costly.

Amazon Neptune Analytics

Staying within the realm of AWS database services, the next announcement centered around Amazon Neptune, a graph database that allows you to represent relationships and connections between data entities. Today’s announcement of the general availability of Amazon Neptune Analytics makes it faster and easier for data scientists to quickly analyze large volumes of data stored within Neptune. Much like the other vector search capabilities mentioned above, Neptune Analytics enables faster vector searching by storing your graph and vector data together. This allows you to find and unlock insights within your graph data up to 80x faster than with existing AWS solutions by analyzing tens of billions of connections within seconds using built-in graph algorithms.

Amazon OpenSearch Service zero-ETL integration with Amazon S3

In addition to enabling vector search across AWS database services, Swami also outlined AWS’ commitment to a “zero-ETL” future, without the need for complicated and expensive extract, transform, and load, or ETL pipeline development. AWS has already announced a number of new zero-ETL integrations this week, including Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service and various zero-ETL integrations with Amazon Redshift. Today, Swami announced another new zero-ETL integration, this time between Amazon OpenSearch Service and Amazon S3. Now available in preview, this integration allows you to seamlessly search, analyze, and visualize your operational data stored in S3, such as VPC Flow Logs and Elastic Load Balancing logs, as well as S3-based data lakes. You’ll also be able to leverage OpenSearch’s out of the box dashboards and visualizations.

AWS Clean Rooms ML

Swami went on to discuss AWS Clean Rooms, which were introduced earlier this year and allow AWS customers to securely collaborate with partners in “clean rooms” that do not require you to copy or share any of your underlying raw data. Today, AWS announced a preview release of AWS Clean Rooms ML, extending the clean rooms paradigm to include collaboration on machine learning models through the use of AWS-managed lookalike models. This allows you to train your own custom models and work with partners without needing to share any of your own raw data. AWS also plans to release a healthcare model for use within Clean Rooms ML within the next few months.

New AI capabilities in Amazon Redshift

The next two announcements both involve Amazon Redshift, beginning with some AI-driven scaling and optimizations in Amazon Redshift Serverless. These enhancements include intelligent auto-scaling for dynamic workloads, which offers proactive scaling based on usage patterns that include the complexity and frequency of your queries along with the size of your data sets. This allows you to focus on deriving important insights from your data rather than worrying about performance tuning your data warehouse. You can set price-performance targets and take advantage of ML-driven tailored optimizations that can do everything from adjusting your compute to modifying the underlying schema of your database, allowing you to optimize for cost, performance, or a balance between the two based on your requirements.

Amazon Q generative SQL in Amazon Redshift

The next Redshift announcement is definitely one of my favorites. Following yesterday’s announcements about Amazon Q, Amazon’s new generative AI-powered assistant that can be tailored to your specific business needs and data, today we learned about Amazon Q generative SQL in Amazon Redshift. Much like the “natural language to code” capabilities of Amazon Q that were unveiled yesterday with Amazon Q Code Transformation, Amazon Q generative SQL in Amazon Redshift allows you to write natural language queries against data that’s stored in Redshift. Amazon Q uses contextual information about your database, its schema, and any query history against your database to generate the necessary SQL queries based on your request. You can even configure Amazon Q to leverage the query history of other users within your AWS account when generating SQL. You can also ask questions of your data, such as “what was the top selling item in October” or “show me the 5 highest rated products in our catalog,” without needing to understand your underlying table structure, schema, or any complicated SQL syntax.

Amazon Q data integration in AWS Glue

One additional Amazon Q-related announcement involved an upcoming data integration in AWS Glue. This promising feature will simplify the process of constructing custom ETL pipelines in scenarios where AWS does not yet offer a zero-ETL integration, leveraging agents for Amazon Bedrock to break down a natural language prompt into a series of tasks. For instance, you could ask Amazon Q to “write a Glue ETL job that reads data from S3, removes all null records, and loads the data into Redshift” and it will handle the rest for you automatically.

Model Evaluation on Amazon Bedrock

Swami’s final announcement circled back to the variety of foundation models that are available within Amazon Bedrock and his earlier assertion that “no one model will rule them all.” Because of this, model evaluations are an important tool that should be performed frequently by generative AI application developers. Today’s preview release of Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best FM for your use case. You can choose to use automatic evaluation based on metrics such as accuracy and toxicity, or human evaluation for things like style and appropriate “brand voice.” Once an evaluation job is complete, Model Evaluation will produce a model evaluation report that contains a summary of metrics detailing the model’s performance.
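
The selection step itself is simple to picture. Here's a minimal sketch, with made-up metric values, of comparing candidate models on automatic metrics and picking the best one per metric (for toxicity, lower is better):

```python
# Illustrative sketch of automatic model evaluation: score candidate models
# on a metric and pick the best. The metric values below are invented.

def pick_best_model(scores, metric="accuracy", lower_is_better=False):
    """Return the model name with the best score for the given metric."""
    key = lambda name: scores[name][metric]
    return min(scores, key=key) if lower_is_better else max(scores, key=key)

evaluations = {
    "model-a": {"accuracy": 0.91, "toxicity": 0.02},
    "model-b": {"accuracy": 0.87, "toxicity": 0.01},
}

best_accuracy = pick_best_model(evaluations, "accuracy")            # "model-a"
least_toxic = pick_best_model(evaluations, "toxicity", True)        # "model-b"
```

Model Evaluation on Bedrock automates producing these scores; the human-evaluation option covers the qualities, like "brand voice," that no single metric captures.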

Swami concluded his keynote by addressing the human element of generative AI and reaffirming his belief that generative AI applications will accelerate human productivity. After all, it is humans who must provide the essential inputs necessary for generative AI applications to be useful and relevant. The symbiotic relationship between data, generative AI, and humans creates longevity, with collaboration strengthening each element over time. He concluded by asserting that humans can leverage data and generative AI to “create a flywheel of success.” With the impending generative AI revolution, human soft skills such as creativity, ethics, and adaptability will be more important than ever. According to a World Economic Forum survey, nearly 75% of companies will adopt generative AI by the year 2027. While generative AI may eliminate the need for some roles, countless new roles and opportunities will no doubt emerge in the years to come.

I entered today’s keynote full of excitement and anticipation, and as usual, Swami did not disappoint. I’ve been thoroughly impressed by the breadth and depth of announcements and new feature releases already this week, and it’s only Wednesday! Keep an eye on our blog for more exciting keynote announcements from re:Invent 2023!

The post New AWS re:Invent Announcements: Swami Sivasubramanian Keynote appeared first on Cloud Academy.

AWS re:Invent 2023: New announcements – Adam Selipsky Keynote https://cloudacademy.com/blog/aws-reinvent-2023-adam-selipsky-keynote/ https://cloudacademy.com/blog/aws-reinvent-2023-adam-selipsky-keynote/#respond Wed, 29 Nov 2023 14:14:43 +0000 https://cloudacademy.com/?p=57012

As anticipated, the AWS re:Invent keynote delivered by AWS CEO Adam Selipsky was packed full of new announcements. This marked the second keynote of re:Invent 2023, with the first delivered by Peter DeSantis. To learn more about what Peter had to say, read here.

I think we were all expecting an emphasis on generative AI announcements, and this keynote certainly delivered on that point! With gen AI making up a large share of the new announcements, it's clear that AWS is taking this technology by the horns and not letting go. 

In this post I want to review and highlight the main announcements which include:

  • Amazon S3 Express One Zone
  • AWS Graviton 4
  • R8g instances for EC2
  • AWS Trainium2
  • Amazon Bedrock customization capabilities
    • Fine Tuning – Cohere Command Light, Meta Llama 2, Amazon Titan Text Lite and Express
    • Amazon Bedrock Retrieval Augmented Generation (RAG) with Knowledge Bases
    • Continued Pre-training for Amazon Titan Text Lite and Express
  • Guardrails for Amazon Bedrock
  • Agents for Amazon Bedrock
  • Amazon Q
  • Amazon Q Code Transformation
  • Amazon Q in Amazon QuickSight
  • Amazon Q in Amazon Connect
  • Amazon DataZone AI recommendations
  • Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service
  • Zero-ETL integrations with Amazon Redshift

So let’s get started from the top, with the first announcement of Amazon S3 Express One Zone.

Amazon S3 Express One Zone

This is a brand new Amazon S3 storage class designed with exceptional performance in mind. It aims to deliver single-digit millisecond latency for workloads and applications that demand it, such as AI/ML training, media processing, HPC, and more. Compared to the S3 Standard storage class, Express One Zone (EOZ) can bring request-cost savings of up to 50% and increase your performance by 10x. That means you now have a storage solution that seamlessly handles millions of requests per second with unparalleled efficiency, all while maintaining consistently low, single-digit millisecond latency.  

This storage class stores your data within a single Availability Zone, as the name suggests, and replicates it multiple times within that AZ, maintaining the durability and availability we have come to expect from Amazon S3. As part of this new storage class, AWS has also introduced a new bucket type to ensure its performance efficiency. These new ‘directory buckets’ have been created to support thousands of requests per second, and as a result they are specific to the EOZ class.

AWS Graviton4

AWS has always focused on optimizing performance and cost, and this is what drove it to develop the Graviton series of chips for its EC2 compute offerings. This year, we have another Graviton chip to add to the family: AWS Graviton4. Boasting 96 Neoverse V2 cores, 2 MB of L2 cache per core, and 12 DDR5-5600 channels, this chip (now the most powerful and energy-efficient chip offered by AWS) gives customers even more options when it comes to selecting the right compute power. Compared with its predecessor, Graviton3, the new Graviton4 chip is 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications.

R8g Instances for EC2

This announcement followed on nicely from the release of the new chip, as this eighth-generation memory-optimized EC2 instance type, the R8g, was built using Graviton4. It will be released in a variety of sizes and will contain 3 times as many vCPUs and 3 times as much memory as an R7g EC2 instance, making this the best price performance for memory-optimized workloads. This EC2 instance is ideal for memory-intensive workloads, such as high-resolution streaming video, ML/AI, and real-time analytics. As always, security is a number one priority, and with the R8g being built on AWS Graviton4 processors, it also comes with enhanced security and encryption.

AWS Trainium2

With the final announcement from a chip and EC2 instance perspective, Adam announced AWS Trainium2, the new purpose-built chip for generative AI and machine learning. AWS developed Trainium to improve performance and reduce cost for training workloads; the chips are heavily used for deep learning, large language models (LLMs), and generative AI models. With the industry's new emphasis on gen AI, this chip will be great news for businesses looking to harness the benefits that the technology can bring. The new chip has been optimized for training foundation models with hundreds of billions, or even trillions, of parameters and is up to 4x faster than first-generation Trainium chips.

There were a number of announcements made around the customization capabilities of Amazon Bedrock, which is a service that allows you to access different foundation models (FMs) via API calls, including those curated by AWS in addition to those provided by 3rd parties. These announcements included:

  • Fine Tuning – Cohere Command Light, Meta Llama 2, Amazon Titan Text Lite and Express
  • Amazon Bedrock Retrieval Augmented Generation (RAG) with Knowledge Bases
  • Continued Pre-training for Amazon Titan Text Lite and Express

Fine-Tuning: Creating a fine-tuned model with Amazon Bedrock lets you increase the accuracy of your models by providing your own labeled business training data for different tasks. This customization training allows your models to learn the most appropriate responses specific to your own organizational tasks.

RAG with Knowledge Bases: RAG is a framework used within AI that enables you to supply additional factual data to a foundation model from an external source to help it generate responses using up-to-date information. A foundation model is only as good as the data that it has been trained on, so if there are irregularities in your responses, you can supplement the model with additional external data which allows the model to have the most recent, reliable and accurate data to work with. Knowledge Bases for Amazon Bedrock is a new feature that simplifies the implementation and management of RAG. It’s a fully-managed capability that manages custom data sources, data ingestion, and prompt augmentation, preventing you from having to implement your own custom integrations.
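
The mechanics of RAG are easy to sketch: retrieve the documents most relevant to the question, then prepend them to the prompt so the model answers from current facts rather than stale training data. The retrieval below is naive keyword overlap, purely for illustration; Knowledge Bases for Amazon Bedrock handles real retrieval and prompt augmentation for you:

```python
# Minimal RAG sketch: naive keyword-overlap retrieval plus prompt
# augmentation. Real systems use vector search, but the flow is the same.

def retrieve(question, documents, k=2):
    """Rank documents by word overlap with the question; return the top k."""
    words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment_prompt(question, documents):
    """Prepend retrieved context so the model answers from supplied facts."""
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "The return window for electronics is 30 days.",
    "Our headquarters moved to Seattle in 2020.",
]
prompt = augment_prompt("What is the return window for electronics?", docs)
```
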

Continued Pre-training: When creating a Continued Pre-training model, you can use your own unlabeled data to train your model with content and knowledge that does not currently exist in the underlying foundation models. This allows you to use company documents, processes, and other material containing organization-specific data to improve the accuracy and effectiveness of your trained model. You can also keep adding unlabeled data and retraining the model to keep it as relevant and accurate as possible.

Guardrails for Amazon Bedrock

Responsible AI sets out the principles and practices for working with artificial intelligence to ensure that it is adopted, implemented, and executed fairly, lawfully, and ethically, giving trust and transparency to the business and its customers. How AI is used, and how it may affect humanity, must be governed and controlled by rules and frameworks. Trust, assurance, and confidence should be embedded in any models and applications built on AI.

With this in mind, AWS has released Guardrails for Amazon Bedrock, which has been designed to promote responsible AI when building applications on top of foundation models. Using different customized safeguards you can define topics, categories, and content with different filters, ensuring that only relevant content is presented to your users while protecting them from harmful and unacceptable content. These safeguards can be aligned to your own internal organization or company policies, allowing you to maintain your own principles.

Agents for Amazon Bedrock

Amazon Bedrock Agents allow you to implement and define your own autonomous agents within your gen AI applications to help with task automation. These agents can then be used to facilitate end-users in completing tasks based on your own organizational data. The agents will also manage the different interactions between the end user, data sources, and the underlying foundation models, and can also trigger APIs to reach out to different knowledge bases for more customized responses. When configuring Agents, you can give your Agent specific instructions on what it is designed to do and how to interact. Using advanced prompts, you can provide your Agent with additional instructions at each step of the orchestration of your application.
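
A toy sketch makes the orchestration idea concrete: a plan is broken into steps, and each step is dispatched to a registered action, with each result feeding the next step. The actions and plan below are invented for illustration; Bedrock Agents derive the plan from your instructions and handle this orchestration for you:

```python
# Toy agent-orchestration sketch: dispatch each step of a plan to a
# registered action, threading the previous result into the next step.

ACTIONS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "format_reply": lambda order: f"Order {order['order_id']} is {order['status']}.",
}

def run_agent(plan):
    """Run (action, argument) steps in order; None means 'use prior result'."""
    result = None
    for action, arg in plan:
        result = ACTIONS[action](arg if arg is not None else result)
    return result

reply = run_agent([("lookup_order", "A123"), ("format_reply", None)])
# reply == "Order A123 is shipped."
```
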

Amazon Q

There were 4 different ‘Q’ services announced during the Keynote. The first was Amazon Q.  

Amazon Q is a generative AI-powered assistant, powered by Amazon Bedrock, that is designed and tailored specifically for your own internal business. Using your own company data as part of its knowledge base, it can help you resolve queries, solve problems, and take action, all by interacting with Q via a chat prompt. By understanding your business processes, policies, and more, it can be used by anyone to refine, streamline, and optimize day-to-day operations and tasks by surfacing fast, immediate information found throughout your organization's documentation. Connecting it to company data sources is made easy using 40 built-in connectors for platforms such as Slack, GitHub, Microsoft Teams, and Jira.

Amazon Q Code Transformation

Falling under the same umbrella as Amazon Q, this is a new AI-assistant offering that streamlines and simplifies the daunting task of upgrading application code. Code upgrades can take days or even weeks depending on how many applications you need to upgrade; with Amazon Q Code Transformation, this can be reduced to just minutes. If you are running Java version 8 or 11 application code, you can use Amazon Q Code Transformation to upgrade your code to Java version 17. AWS is also working on the capability to upgrade from Windows-based .NET frameworks to cross-platform .NET in the coming months. When automating code upgrades, Code Transformation will analyze your existing code, formulate a plan, update packages and dependencies, replace deprecated and inefficient code elements, and adopt security best practices. Upon completion, you can review and accept any changes before deploying the upgrade into your production environment.

Amazon Q in Amazon QuickSight

The 3rd Amazon Q announcement brings you Amazon Q in Amazon QuickSight, providing you a generative AI-powered business intelligence assistant. This makes understanding your data within QuickSight easy to navigate and present. Asking Q to provide, discover, and analyze data gets you the results you need quickly and conveniently thanks to natural language processing of your user input. You can continue to converse with Q in QuickSight, refining your requirements based on the results as if you were having a conversation as it contextualizes your previous requests. This enables anyone to be able to create their own dashboards, collect visuals, and gain actionable insights from your company data without requiring BI teams to perform any data manipulation tasks for you.

Amazon Q in Amazon Connect

The final installment of Amazon Q was Amazon Q in Amazon Connect, which is designed to enhance the experience between contact centers and their customers using large language models (LLMs). LLMs are used by generative AI to generate text based on a series of probabilities, enabling them to predict, identify, and translate content. They are often used to summarize large blocks of text, to classify text to determine its sentiment, and to create chatbots and AI assistants. This enables Amazon Q in Amazon Connect to detect customer intent and use data sources containing organizational information, such as product manuals or catalog references, to respond with recommended content for the customer support agent to communicate back to the customer. These recommended responses and actions are delivered quickly, helping to reduce waiting time and increase customer satisfaction, enhancing the customer experience.

Amazon DataZone AI Recommendations

The final announcement involving generative AI was Amazon DataZone AI recommendations. Now available in preview, this new feature enhances Amazon DataZone’s ability to generate a catalog for your business data that’s stored in S3, RDS, or third-party applications like Salesforce and Snowflake by leveraging generative AI and LLMs within Amazon Bedrock to create meaningful business descriptions for your data and its schemas. Previously, DataZone could only generate table and column names for your business data catalog. This added capability provides additional context by describing the meaning of the fields within your tables and their schemas, helping data scientists and engineers analyze data that may not otherwise have enough metadata to properly clarify its meaning. This promises to streamline the process of data discovery and analysis, making your business data more accessible and easier to understand.

Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service

The final two announcements both involved new zero-ETL integrations with AWS services, the first being Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service. ETL, short for “extract, transform, and load,” refers to the often time-consuming and expensive process of building data pipelines to sanitize and normalize data that may come from many disparate sources in order to perform further analysis on the data or to use it in your AI or ML workloads. This new feature is now generally available and allows you to leverage the power of Amazon OpenSearch Service, including full-text, fuzzy, and vector search, to query data you have stored in DynamoDB without needing to build a costly ETL pipeline first. You can enable this integration directly within the DynamoDB console, which leverages DynamoDB Streams and point-in-time recovery to synchronize data using Amazon OpenSearch Ingestion. You can specify mappings between fields in multiple DynamoDB tables and Amazon OpenSearch Service indexes. According to AWS, data in DynamoDB will be synchronized to your Amazon OpenSearch Service managed cluster or serverless collection within seconds.
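
Conceptually, the integration applies a stream of change events to a search index so it stays in sync with the table, with no pipeline code on your side. The sketch below uses in-memory stand-ins (a dict for the index, dicts shaped loosely like stream records) purely to illustrate the flow:

```python
# Sketch of stream-driven sync: apply insert/modify/remove change events
# (shaped loosely like DynamoDB Streams records) to a dict standing in for
# a search index. The real integration does this continuously for you.

def apply_stream(index, events):
    """Apply change events to the index so it mirrors the source table."""
    for event in events:
        key = event["key"]
        if event["type"] in ("INSERT", "MODIFY"):
            index[key] = event["item"]
        elif event["type"] == "REMOVE":
            index.pop(key, None)
    return index

index = {}
apply_stream(index, [
    {"type": "INSERT", "key": "p1", "item": {"name": "widget", "price": 9}},
    {"type": "MODIFY", "key": "p1", "item": {"name": "widget", "price": 7}},
    {"type": "REMOVE", "key": "p2"},
])
# index now holds only the latest state: {"p1": {"name": "widget", "price": 7}}
```
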

Zero-ETL integrations with Amazon Redshift

Last but not least are some additional new zero-ETL integrations, this time with Amazon Redshift. Many organizations leverage Redshift to perform data analytics, but have generally needed to build ETL pipelines to connect data from sources such as Amazon Aurora, RDS, and DynamoDB. With this announcement, customers can now take advantage of new features to replicate their data from the following sources directly into Amazon Redshift:

  • Amazon Aurora MySQL (generally available) – this offering supports both provisioned DB instances as well as Aurora Serverless v2 DB instances, provides near real-time analytics, and is capable of processing over 1 million transactions each minute
  • Amazon Aurora PostgreSQL (preview) – this offers near real-time analytics and machine learning on your data stored in Amazon Aurora PostgreSQL
  • Amazon RDS for MySQL (preview) – this zero-ETL integration performs seamless data replication between RDS for MySQL and Redshift including ongoing change synchronization and schema replication
  • Amazon DynamoDB (limited preview) – this zero-ETL integration offers a fully-managed replication solution between DynamoDB and Redshift without consuming any DynamoDB Read Capacity Units (RCUs)

All of these integrations offer fully-managed solutions for replicating your data into Redshift data warehouses, unlocking the potential for data analysts to gain insight into your business data using analytics queries and machine learning models.

The post AWS re:Invent 2023: New announcements – Adam Selipsky Keynote appeared first on Cloud Academy.

New AWS re:Invent Announcements: Monday Night Live with Peter DeSantis https://cloudacademy.com/blog/aws-reinvent-2023-peter-desantis-keynote/ https://cloudacademy.com/blog/aws-reinvent-2023-peter-desantis-keynote/#respond Tue, 28 Nov 2023 14:54:16 +0000 https://cloudacademy.com/?p=56995

Another year, another re:Invent Monday Night Live. It comes as no surprise that the re:Invent week kicks off with this session, as it highlights some of the most remarkable engineering achievements that push the limits of cloud computing. While we’re all familiar with the AWS console and the services that AWS provides, this session goes deeper into the underlying components that underpin these services, focusing on the actual silicon, storage, compute, and networking layers. 

The session was led by Peter DeSantis, the Senior Vice President of AWS Utility Computing. Let’s see what new announcements Peter had in store for us this year!

The keynote primarily centered around the strategy of bringing the benefits of serverless computing to the existing “serverful” software in use today. Peter began by discussing the benefits of serverless computing and how AWS continues to create services that remove the undifferentiated heavy lifting of provisioning, managing, and maintaining servers. The keynote focused on enhancements in database, compute, caching, and data warehouse capabilities.

Amazon Aurora Limitless Database

The first announcement is in the category of database improvements. Currently, tons of customers use Amazon Aurora Serverless for its ability to adjust capacity up and down to support hundreds of thousands of transactions. 

However, for some customers, this scale is simply not enough. Some customers need to process and manage hundreds of millions of transactions and have to shard their database in order to do so. However, a sharded solution is complex to maintain and requires a lot of time and resources. 

This problem got AWS thinking: What would sharding look like in an AWS-managed, serverless world? 

Aurora Limitless Database solves this issue by enabling users to scale the write capacity of their Aurora database beyond that of a single server. It does this partly through sharding, enabling users to create a Database Shard Group containing shards that each store a subset of the data in a database. They’ve also taken transactions into account, providing transactional consistency across all shards in the database. 

The best part is that it removes the burden of managing a sharded solution from the customer, while providing the benefits of parallelization. 
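
The core of sharding is simple to picture: each key is hashed to pick one shard, so writes for different keys land on different servers. The sketch below shows that routing step with invented names; Aurora Limitless manages all of this (plus rebalancing and cross-shard transactions) for you:

```python
# Conceptual shard-routing sketch: hash each key to choose a shard, so
# write traffic spreads across servers. Illustrative only.
import hashlib

def shard_for(key, num_shards):
    """Deterministically map a key to a shard index."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

shards = [[] for _ in range(4)]
for customer_id in ["c-100", "c-101", "c-102", "c-103"]:
    shards[shard_for(customer_id, 4)].append(customer_id)
# Each customer's writes always go to the same shard, but different
# customers can be served by different servers in parallel.
```
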

To create Aurora Limitless Database, AWS needed to create a few underlying technologies: 

  • An internal tool called Caspian, which provides resources to shards and enables them to scale up and down as needed. 
  • An improved request routing layer, which handles some of the more difficult orchestration tasks, such as orchestrating complex queries across multiple shards and combining results. 
  • A better solution to synchronize server clocks to create an ordered log of events for the database… which leads us to our next announcement.

Amazon Time Sync Service

If you’re feeling déjà vu, it’s because Amazon released the Amazon Time Sync Service years ago. It’s a high-accuracy timekeeping service powered by redundant satellite-connected and atomic reference clocks in AWS Regions. The original version of the service could keep clocks within one millisecond of the Coordinated Universal Time (UTC) global standard. 

The new version, announced today, improves on the existing service: it now keeps clocks within microseconds of the UTC global standard. You can take advantage of the Amazon Time Sync Service on supported EC2 instances, enabling you to increase distributed application transaction speed, more easily order application events, and more! 
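
Why does the jump from milliseconds to microseconds matter? Two events on different servers can only be ordered with confidence if their timestamps differ by more than the combined clock error. A small sketch, with illustrative numbers:

```python
# Two events 200 microseconds apart can be ordered with microsecond-accurate
# clocks, but not with millisecond-accurate ones. Times are in seconds.

def can_order(t1, t2, clock_error):
    """True if the error bounds around t1 and t2 do not overlap."""
    return abs(t1 - t2) > 2 * clock_error

t1, t2 = 0.000000, 0.000200   # events 200 microseconds apart

millisecond_clocks = can_order(t1, t2, clock_error=0.001)      # False: ambiguous
microsecond_clocks = can_order(t1, t2, clock_error=0.000050)   # True: ordered
```

This is exactly the property that lets Aurora Limitless build an ordered log of events across shards.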

If you already use the service on supported instances, you will see clock accuracy improve automatically.

Amazon ElastiCache Serverless

Next on the list is a revolution in the caching category. Historically, caches are not very serverless, as the performance of the cache relies on the memory of the server that hosts it. Additionally, there’s a ton of resource planning that goes into caching data. If your cache is too small, you evict useful data. If your cache is too large, you waste money on memory you don’t need. With a serverless cache, both the infrastructure management and resource planning goes away. 

This is the problem that Amazon ElastiCache Serverless hopes to solve. It’s a serverless option of ElastiCache that enables you to launch a Redis or Memcached caching solution without having to provision, manage, scale, or monitor a fleet of nodes. 

It’s compatible with Redis 7 and Memcached 1.6, it has a median lookup latency of half a millisecond, and it supports up to 5 TB of memory capacity. 

Under the hood of Amazon ElastiCache Serverless is a sharded caching solution. This means that the technologies supporting the service are very similar to the technologies that underpin the Aurora Limitless Database. For example, Amazon ElastiCache Serverless uses:

  • The internal service Caspian to right-size and scale shards up and down.  
  • The improved request routing layer to ensure speed, so that latency isn’t added to cache requests in a distributed system. 
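
The sizing trade-off that ElastiCache Serverless removes is easy to demonstrate: with a fixed-capacity LRU cache, useful entries get evicted the moment the working set outgrows the provisioned memory. A minimal, self-contained illustration:

```python
# Fixed-size LRU cache: once capacity is exceeded, the least recently used
# entry is evicted, even if it is still useful. Serverless caching removes
# the need to guess this capacity up front.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)   # capacity exceeded: "a" is evicted
```
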

Amazon Redshift Serverless

Next up is data warehouses. The existing Amazon Redshift Serverless service makes it easy to provision and scale data warehouse capacity based on query volume. If all queries are similar, this scaling mechanism works really well. However, in cases where you don’t have uniform queries, there may be times when a large complex query slows down the system and impacts other smaller queries. 

To solve this problem, Amazon created a new Redshift serverless capability to provide AI-driven scaling and optimizations. The idea is to avoid bogging down a Redshift cluster by predicting and updating Redshift capacity based on anticipated query load. It anticipates this load by analyzing each query, taking into account query structure, data size, and other metrics. 

It then uses this query information and determines if the query has been seen before or if it’s a new query. All of this helps Redshift determine the best way to run the query, with efficiency, impact on cluster, and price in mind.
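
In spirit, the capacity decision is a function of query features. The heuristic below is entirely invented (AWS's actual models are far more sophisticated), but it shows the shape of the idea: more joins and more data scanned mean more capacity, while a previously seen query needs less headroom:

```python
# Invented heuristic, for illustration only: estimate warehouse capacity
# units from query features, discounting queries seen before.

def estimate_capacity(base_units, joins, gb_scanned, seen_before):
    units = base_units + 2 * joins + gb_scanned // 100
    if seen_before:
        units = max(base_units, units // 2)   # known queries need less headroom
    return units

small_known = estimate_capacity(8, joins=1, gb_scanned=50, seen_before=True)
big_new = estimate_capacity(8, joins=6, gb_scanned=900, seen_before=False)
# A large, never-seen query gets far more capacity than a small familiar one.
```
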

AWS Center for Quantum Computing – Error Correction Improvements

A few years ago, AWS announced the AWS Center for Quantum Computing at the Caltech campus in California. Its primary task is to overcome technical challenges in the quantum computing space… and they certainly have their work cut out for them. 

Currently, the quantum computing space has a lot of technical challenges. One of which is that today’s quantum computers are noisy and prone to error. 

Unlike general-purpose computers, which compose binary bits into more complex structures to build computer systems, quantum computers use an underlying component called a qubit, which can represent more than just a 0 or a 1. 

With general-purpose computers, a binary bit can suffer bit-flip errors, but these are easily protected against. In the quantum world, you not only have to protect against bit flips, but also against phase flips. 

The AWS Center for Quantum Computing is focused on implementing quantum error correction more efficiently, so that these errors happen less frequently. To give you an idea of where we're at today: state-of-the-art quantum computers experience 1 error in every 1,000 quantum operations. 

In years to come, the AWS team at the Center for Quantum Computing hopes to have just 1 error in every 100 billion quantum operations.

That brings us to the end of the keynote. I always love when AWS Keynotes are centered mainly around improvements to existing services – and that’s exactly what Peter DeSantis offered us today. The best part is: it’s only Monday. There’s so much more to come for AWS re:Invent 2023. Stay tuned!

The post New AWS re:Invent Announcements: Monday Night Live with Peter DeSantis appeared first on Cloud Academy.

AWS re:Invent 2023: Guide to Everything You Need to Know https://cloudacademy.com/blog/aws-reinvent-2023-guide-to-everything-you-need-to-know/ https://cloudacademy.com/blog/aws-reinvent-2023-guide-to-everything-you-need-to-know/#respond Wed, 15 Nov 2023 10:19:17 +0000 https://cloudacademy.com/?p=56916

The post AWS re:Invent 2023: Guide to Everything You Need to Know appeared first on Cloud Academy.

AWS re:Invent 2023 is upon us and – here at Cloud Academy – we can’t wait for it to begin. Just as in years past, Amazon Web Services will come up with fantastic new announcements and initiatives.
Since there are many things to keep in mind for this year’s event, in this full guide to AWS re:Invent 2023 we’ll go through each of them, providing you with all the information you need to get the most out of it.

But let’s start from the beginning: are there any of you who still don’t know what AWS re:Invent is? Don’t worry, we’ll help with that, too!

Here are all the AWS re:Invent 2023 topics we’ll cover:

What is AWS re:Invent?

As mentioned on the official website:

AWS re:Invent is a learning conference hosted by AWS for the global cloud computing community.

The event is focused on developers and engineers, system administrators, systems architects, IT executives, and technical decision-makers.

When is AWS re:Invent 2023?

re:Invent 2023 will take place from November 27 to December 1, 2023. It will be a 5-day full immersion in which you’ll have the opportunity to attend technical sessions, keynotes, and more.

How to register for AWS re:Invent 2023?

AWS re:Invent 2023 will take place in Las Vegas, Nevada across multiple venues. But, like previous years, it will adopt a hybrid model. So, if you’re wondering how to register for re:Invent 2023, you need to know 2 important things:

  • You can attend the event in person or virtually. An in-person ticket costs $2099, while virtual access is free. If you want to register for the event, you need to go to the official registration page and choose the package you prefer.
  • Accounts are not carried over from previous years, so even if you had an account for re:Invent 2022, you will need to create a new one for 2023.

Where to stay during the AWS re:Invent 2023?

Which hotels are dedicated to AWS re:Invent? This is a topic all in-person attendees should be interested in. Las Vegas is great to explore, but there will be plenty to see, and people to meet, at re:Invent itself.

AWS has dubbed this network of venues the “re:Invent campus“, promoting an immersive learning experience through agreements with some of the best-known hotels in the city.

The first group of properties comprises the AWS re:Invent venues, where AWS is hosting the event. Staying here means living and breathing AWS re:Invent and experiencing it from every angle.

Here are the six venues for this year’s event:

  • Encore – Planned activities include breakout sessions and bookable meeting space.
  • Wynn – Planned activities include breakout content and meals.
  • The Venetian | Palazzo – Planned activities include breakout sessions, registration, Expo, keynotes, content hub, and meals.
  • Caesars Forum – Planned activities include breakout sessions, content hub, and meals.
  • MGM Grand – Planned activities include breakout sessions, registration, content hub, and meals.
  • Mandalay Bay – Planned activities include breakout sessions, registration, content hub, and meals.

The blue properties are hotels with which AWS has entered into an agreement. The convention has arranged for room blocks and dedicated event discounts.

Here are the 10 sleep-only hotels (no event activities take place there):

All the listed options provide transportation solutions between the various locations of the event.

Are there any health measures to follow?

AWS wants to protect the health of the event’s attendees, partners, and employees. Here are the General AWS re:Invent 2023 health measures:

  • Attendees are not required to show proof of vaccination against COVID-19.
  • Attendees are not required to provide a record of a negative COVID-19 test result.
  • Masks and social distancing are not required.

Are there other things to be aware of?

Probably the most important: the AWS Code of Conduct. This list of principles explains the behavior AWS expects from its community members at AWS events and across blogs, online forums, and social media platforms. If a user breaches the code, AWS may prohibit them from attending future AWS events and from interacting across blogs, online forums, and social media platforms. We recommend you read it before attending re:Invent.

What’s the AWS re:Invent agenda for this year?

The AWS re:Invent agenda is full of interesting initiatives. You can check the full program on the official site, but here are some quick highlights day by day (PST time zone):

What are the AWS re:Invent 2023 keynotes?

Hey wait, what are these "keynotes"? The AWS re:Invent 2023 keynotes are in-person talks where industry leaders from the world of cloud computing make important announcements, present the latest product launches, and share inspiring customer stories.

The planned keynotes for this year's event will be delivered by:

  • Adam Selipsky – CEO of AWS: Keynote on TUE., NOV. 28 | 8:30 AM – 10:30 AM
  • Peter DeSantis – Senior Vice President of AWS Utility Computing: Keynote on MON., NOV. 27 | 7:30 PM – 9:00 PM
  • Swami Sivasubramanian – Vice President of AWS Data and AI: Keynote on WED., NOV. 29 | 8:30 AM – 10:30 AM
  • Ruba Borno – Vice President of AWS Worldwide Channels and Alliances: Keynote on WED., NOV. 29 | 3:00 PM – 4:30 PM
  • Dr. Werner Vogels – Amazon.com Vice President and Chief Technology Officer: Keynote on THUR., NOV. 30 | 8:30 AM – 10:30 AM

What Innovation Talks are planned for this year?

And what about the AWS re:Invent Innovation Talks? This year, 17 AWS leaders will take the stage and talk about various topics: data for generative AI, cloud operations, storage, Machine Learning, security, and much more. If you want to discover them all, you should check the full list on the official site.

What is “PeerTalk”?

A format launched at last year's event is PeerTalk. Yeah, but what is it? PeerTalk is an onsite networking program for the event's attendees. The goal of this initiative is to "expand your mind and your network". For more information, check the dedicated page directly.

Conclusion

Well, we've come to the end of this post, but this is just a starting point: in the coming weeks, before and during re:Invent 2023, exciting new announcements will keep rolling out. As we said, we can't wait to see what AWS has in store for us this year. To stay up to date, keep an eye on Cloud Academy channels and social media.

See you there in-person and online!

The post AWS re:Invent 2023: Guide to Everything You Need to Know appeared first on Cloud Academy.

]]>
0
New at AWS re:Invent: Werner Vogels Keynote https://cloudacademy.com/blog/new-at-aws-reinvent-werner-vogels-keynote/ https://cloudacademy.com/blog/new-at-aws-reinvent-werner-vogels-keynote/#respond Wed, 07 Dec 2022 04:40:40 +0000 https://cloudacademy.com/?p=52566 Dr. Werner Vogels, Chief Technology Officer and Vice President at Amazon, gives his AWS re:Invent keynote speech on managing complexity.

The post New at AWS re:Invent: Werner Vogels Keynote appeared first on Cloud Academy.

]]>

Dr. Werner Vogels’ keynote is usually the most anticipated at AWS re:Invent conventions. It’s no surprise he delivers what can be considered the closing keynote. He took a break from Now Go Build – Season 3 production to visit with customers in Las Vegas at the AWS re:Invent 2022 conference. Dr. Vogels and his team have perfected the art of keynote delivery with a cinematic intro, some background education, guest speakers to support the talk’s theme, and always an inspiring set of new service introductions.

This year's theme was "Managing Complexity by observing and imitating natural patterns in our systems". It was an entertaining, educational, and inspirational keynote; there was something for everyone, and it's worth watching when you have an hour and a half to invest.

The closing statement can bring the entire keynote into focus: 

“I hope that you can all agree that we can learn from something like looking around us and observing the greatest system in existence, the universe itself. The universe itself is extremely agile; It is extremely fault tolerant as well and resilient and robust. 

We should learn from the principles that we see in nature and the world around us when we start building our computer systems. I hope this talk has inspired you to build bigger, better bolder systems much faster.”

Let’s break this down as it happened. 

The cinematic intro: a spoof of "The Matrix", depicting a world that is completely ordered and synchronized one-to-one. You realize very quickly that throughput and latency become essential to getting anything done. High throughput, low latency, loose coupling, and event processing are what we should pursue in our systems if they are to keep delivering results despite a failure or malfunction. He says, "Synchronous is a simplification, an abstraction, a convenience to write our programs, and finally an illusion" to be avoided in our systems design. The world is asynchronous and event-driven. We should imitate that as much as possible in our systems.
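The synchronous-versus-asynchronous point is easy to make concrete in code. Here is a minimal Python `asyncio` sketch (my illustration, not anything from the keynote): events handled concurrently finish in roughly the time of the slowest one, not the sum of all of them.

```python
import asyncio

async def handle_event(event_id: str, delay: float) -> str:
    # Simulate non-blocking I/O, e.g. a call to a downstream service.
    await asyncio.sleep(delay)
    return f"processed:{event_id}"

async def main() -> list:
    # Events are handled concurrently: total latency approaches the
    # slowest single event rather than the sum of all of them.
    events = [("a", 0.03), ("b", 0.01), ("c", 0.02)]
    return await asyncio.gather(*(handle_event(e, d) for e, d in events))

results = asyncio.run(main())
print(results)  # results come back in submission order
```

Swap `asyncio.gather` for a sequential loop and the total time becomes the sum of the delays, which is exactly the tight coupling the keynote warns against.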

The background education

"Synchrony implies tightly coupled systems", and the initial Amazon S3 design principles from 2006 still apply to this day as a way to avoid exactly that.

Amazon S3 Design Principles: 

  1. Decentralization 
  2. Asynchrony 
  3. Local responsibility 
  4. Decompose into small well understood building blocks. 
  5. Autonomy 
  6. Controlled concurrency 
  7. Failure Tolerance 
  8. Controlled Parallelism 
  9. Symmetry 
  10. Simplicity 

Embracing asynchrony leads to loosely coupled systems, and embracing workflows enables us to build applications from loosely coupled systems.

The patterns for workflows have a general guiding set of principles which can be summarized as: 

  1. Sequence
  2. Retry
  3. Error handling
  4. Parallel
  5. Based on data
  6. Concurrent iterative

Loosely coupled systems implement: 

  1. Fewer dependencies 
  2. Failure isolation 
  3. Evolvable architectures

The natural sequence of this best practice is the journey from a monolithic application to a service-oriented architecture, and then to a microservices architecture built on shared infrastructure services (IaaS).

According to Gall's Law: "All complex systems that work evolved from simpler systems that worked."

“It’s evolve or perish when it comes to systems design and the best way to build systems that can evolve is to focus on event driven architectures.  Event driven architectures enable global scale. Event-driven architectures help development teams move faster. The world is built in patterns.”

Patterns in Event Driven Architectures include: 

  1. Change Data Capture 
  2. Asynchronous coupling 
  3. Self-healing replicators. 

To go deeper on these ideas, he points to the Amazon Distributed Computing Manifesto. You can also visit the Amazon Builders' Library.

Finally, you can also take a look at the following Cloud Academy courses and have a hands-on experience with: 

The Service Introductions

1) Step Functions Distributed Map (Generally available)

A Serverless Solution for Large-Scale Parallel Data Processing

“Step Function’s map state executes the same processing steps for multiple entries in a dataset. The existing map state is limited to 40 parallel iterations at a time. This limit makes it challenging to scale data processing workloads to process thousands of items (or even more) in parallel. In order to achieve higher parallel processing prior to today, you had to implement complex workarounds to the existing map state component.”

“The new distributed map state allows you to write Step Functions to coordinate large-scale parallel workloads within your serverless applications. You can now iterate over millions of objects such as logs, images, or CSV files stored in Amazon Simple Storage Service (Amazon S3). The new distributed map state can launch up to ten thousand parallel workflows to process data. You can process data by composing any service API supported by Step Functions, but typically, you will invoke Lambda functions to process the data with code written in your favorite programming language.”

“This new capability is optimized to work with S3. I can configure the bucket and prefix where my data is stored directly from the distributed map configuration. The distributed map stops reading after 100 million items and supports JSON or CSV files of up to 10GB.”

“When processing large files, think about downstream service capabilities. Let’s take Lambda again as an example. Each input—a file on S3, for example—must fit within the Lambda function execution environment in terms of temporary storage and memory. To make it easier to handle large files, Lambda Powertools for Python introduced a new streaming feature to fetch, transform, and process S3 objects with minimal memory footprint. This allows Lambda functions to handle files larger than the size of their execution environment. To learn more about this new capability, check the Lambda Powertools documentation.” (per the AWS launch blog.)

See: https://aws.amazon.com/blogs/aws/step-functions-distributed-map-a-serverless-solution-for-large-scale-parallel-data-processing for details.
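To make the shape of this concrete, here is a hedged sketch of a distributed map state in Amazon States Language, built as a Python dict. The bucket, prefix, and Lambda ARN are placeholders, and the exact field names should be checked against the Step Functions documentation before relying on them:

```python
import json

# Hedged sketch of a distributed map state definition. The S3 bucket,
# prefix, and Lambda ARN below are hypothetical placeholders.
definition = {
    "StartAt": "ProcessAllObjects",
    "States": {
        "ProcessAllObjects": {
            "Type": "Map",
            # ItemReader pulls the dataset straight from S3.
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": "my-data-bucket", "Prefix": "logs/"},
            },
            # DISTRIBUTED mode is what lifts the old 40-iteration limit.
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
                "StartAt": "ProcessOne",
                "States": {
                    "ProcessOne": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-object",
                        "End": True,
                    }
                },
            },
            "MaxConcurrency": 1000,
            "End": True,
        }
    },
}

print(json.dumps(definition)[:40])
```

This JSON would be passed as the `definition` argument when creating the state machine (for example via `boto3`'s Step Functions client).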

 2) AWS Application Composer (Preview)

Visually design and build serverless applications quickly. A visual canvas makes composing serverless applications easier. Rapidly generate ready-to-deploy infrastructure as code (IaC) that follows best practices. Maintain a model of your architecture that's easy to share and build with team members.

The call to action by Dr. Vogels: “I urge you to start with AWS application Composer to make it easy for you to get started with building these applications.”

“AWS Application Composer is a visual builder that makes it easier for developers to design an application architecture by dragging, grouping, and connecting AWS services in a visual canvas. Developers can start a new architecture from scratch, or they can import an existing AWS CloudFormation or AWS Serverless Application Model (SAM) template, including those generated by AWS Application Composer. The AWS Application Composer experience is focused around common serverless services like AWS Lambda, AWS Step Functions, and Amazon EventBridge, but it can be used to compose any AWS service supported by AWS CloudFormation resources. Developers can export infrastructure as code (IaC) to incorporate into their existing processes, such as local testing with AWS SAM Command Line Interface (CLI), peer review through version control, or deployment through CloudFormation and continuous integration and delivery (CI/CD) pipelines.” 


See the following URLs for details: 

3) Amazon EventBridge Pipes (Generally available)

Connects event producers and consumers in seconds. Build advanced integrations in minutes with enhanced security, reliability and scalability out of the box. 

“So basically this is pipes on steroids because it’s not just easily composed, It also has the ability to manipulate the events that are flowing through your pipe.” Dr. Werner Vogels

  • Write less code: Build fully managed integrations quickly with an easy-to-use interface that connects event producers with consumers.
  • Save costs with filtering and built-in integrations: Lower compute and usage costs by using event filters and built-in integrations; only process and pay for the events you need.
  • Source events in real time: Deliver events to over 14 AWS services with EventBridge Pipes seamlessly polling for new events.
  • Reduce operational load: Reliably connect services without worrying about scalability, security, patching, or provisioning your infrastructure.
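As a rough illustration of how a pipe with a filter might be defined, here is a hedged sketch of the parameters you would pass to `boto3.client("pipes").create_pipe(**params)`. All ARNs are placeholders, and the parameter names should be verified against the EventBridge Pipes API reference:

```python
import json

# The filter keeps only ORDER_CREATED events, so downstream targets
# only process (and you only pay for) the events you care about.
filter_pattern = {"body": {"eventType": ["ORDER_CREATED"]}}

# Hypothetical pipe from an SQS queue to a Step Functions state machine.
params = {
    "Name": "orders-pipe",
    "Source": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
    "Target": "arn:aws:states:us-east-1:123456789012:stateMachine:fulfil-order",
    "RoleArn": "arn:aws:iam::123456789012:role/pipes-role",
    "SourceParameters": {
        "FilterCriteria": {"Filters": [{"Pattern": json.dumps(filter_pattern)}]}
    },
}

print(params["Name"])
```

The filter-pattern syntax follows EventBridge's existing event-pattern matching, which is what "manipulate the events flowing through your pipe" refers to.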


See the following URLs for details: 

4) Amazon CodeCatalyst (Preview)

Spark innovation. Accelerate delivery. 

Unified software development service to quickly build and deliver applications on AWS.

Create new projects with everything you need in minutes

CodeCatalyst project blueprints automatically setup everything you need to start a new software development project, including CI/CD, deployable code, issue tracking, and AWS services configured according to best practices.

Spend more time coding and less time managing local development environments

CodeCatalyst Dev Environments are available on-demand in the cloud and are automatically created with branch code and consistent project settings, providing faster setup, development, and testing.

Collaborate with your team using built-in features such as issue tracking

Start creating and assigning issues. Use priorities, estimates, labels, and custom fields to help your team prioritize what’s next. Exchange feedback on issues and pull requests with your team.

Quickly customize your pipelines and fully-managed build environments

Simplify the creation and customization of automated workflows by configuring pre-defined actions using the visual editor or directly editing the underlying YAML configuration. Along with a collection of CodeCatalyst actions, you can also use any GitHub Action in your workflows.

Level up your testing strategy with easily added quality gates

CodeCatalyst has built-in support for code coverage, software composition analysis, and unit tests. You can view your test results at a glance with automated visual test reporting.

The easiest way to deploy to AWS

CodeCatalyst makes it easy to deploy your application to AWS services such as AWS Lambda or Amazon ECS. You can deploy application stacks across accounts or AWS Regions by simply listing them as targets in your pipeline

Track progress across your team

CodeCatalyst’s notifications and personalized activity feed help you stay updated on relevant project activity such as successful deployments or accepted pull requests.

(Per the product page)

Visit the following URLs for details:

The keynote included a significant amount of additional material including guest speakers.  

Angela Timofte, Director of Engineering at Trustpilot and AWS Hero, spoke about Trustpilot's event-driven architecture and how it has allowed them to "scan 100% of reviews coming through while using Amazon Step Functions to orchestrate workflows to take action on abusive behavior or unusual patterns in reviews."

Nathan Thomas, VP of Unreal Engine at Epic Games, introduced the RealityScan app for iOS and demonstrated the highly sophisticated building tools available from Unreal Engine. Among the items discussed were The Matrix Awakens, an Unreal Engine 5 experience, and MetaHuman Creator, an online web service that lets you generate high-fidelity digital humans in minutes, free for use in Unreal Engine. These are great tools for game designers, and they are being used in other industries as well.

The above are only a few of the many subjects discussed in the keynote. It's certainly worth watching, and thank you for reading this post.

The post New at AWS re:Invent: Werner Vogels Keynote appeared first on Cloud Academy.

]]>
0
New at AWS re:Invent: Partner Keynote with Ruba Borno, VP of Channels and Alliances https://cloudacademy.com/blog/new-at-aws-reinvent-partner-keynote-with-ruba-borno/ https://cloudacademy.com/blog/new-at-aws-reinvent-partner-keynote-with-ruba-borno/#respond Fri, 02 Dec 2022 02:40:47 +0000 https://cloudacademy.com/?p=52439 AWS VP of Channels and Alliances, Ruba Borno, gives a keynote on Amazon's efforts to help customers implement solutions in the field.

The post New at AWS re:Invent: Partner Keynote with Ruba Borno, VP of Channels and Alliances appeared first on Cloud Academy.

]]>

As it is every year, this week is pretty exciting for Cloud Computing and AWS in particular!

Let’s take a look at this year’s announcements during the Partner Keynote.

Ruba Borno has been with AWS for about a year now and she is currently leading the effort for worldwide channels and alliances for AWS. This is a key role for Amazon since partners are focused on specific areas and help customers implement solutions in the field.

AWS Partners are a tremendous driving force of innovation, pushing boundaries and immediately bringing the latest and greatest service offerings to their customers!

Without further ado, let’s see what we got going on this year!

AWS Partner Solution Factory, now in Preview

AWS is bringing customers and partners closer together with this solution.

The goal is to allow collaboration between AWS Experts and partners to storyboard, design, architect and demo actual solutions to help customers solve their technical challenges.

In this case, imagine that you are a customer and you hire a consulting partner to implement a solution for your business. Once your solution is implemented, you could have AWS in-house experts perform a well-architected review of the solution, giving you peace of mind that your solution is rock-solid, secure and highly-available.

Even better if these solutions are pre-built, well-tested and already-deployed to existing AWS customers! That’s the power of this offering!

Introducing AWS Marketplace Vendor Insights

This is a service intended to help assess risk by enabling sellers to make security and compliance information available in the marketplace. Using the provided web-based dashboard, sellers and buyers will be able to see governance, compliance, and related information including data privacy, access control, application security, and more.

It also helps sellers promote their security posture, which reduces the repetitive task of responding to buyers that require risk assessment information.

Initially, the seller creates a security profile, which can then be used to grant access to buyers so they can see AWS-sourced evidence directly from the seller's environment. This includes AWS Config data, AWS Audit Manager assessments, external audits (such as SOC 2 and ISO 27001), and of course seller self-assessments as well.

The buyer can review this data on their dashboard or download it for import into their own vendor management software.

This service can help shorten months of back-and-forth with questionnaires which shortens lead times and keeps everybody honest by monitoring expiration dates (for example) for compliance certifications.

There is no cost associated with this service and it’s available right now in any region where the AWS Marketplace is available.

Announcing the preview release of Data Exchange for AWS Lake Formation

Just a quick refresher on Data Exchange: it's a service introduced back in 2019 that serves as a marketplace connecting third-party data providers with customers who need their data. It's a very efficient way to make your cloud applications smarter by using up-to-date data sources directly from producers.

This new feature announced in Preview today is specific to AWS Lake Formation and it allows subscribers to find and use third-party datasets that are managed directly through Lake Formation.

If you recall, Lake Formation is AWS’ answer for those in need of Data Lakes which can then be used to perform analytics and machine learning across various data sets.

Once subscribed, you will see the third-party data set in Lake Formation and be able to query, transform, and share it within your AWS account or your AWS Organization using License Manager. The benefit is that the data shows up like any other data in your Lake Formation setup, so there is nothing new for your data science team to learn and no extra overhead to manage: no importing, no ETL, no pipelines, just straight to analysis!

This feature (again, still in preview!) is available in most AWS Regions!

Teaching Cloud Computing

As always, AWS and its partners try their best to upskill IT personnel around the world, closing the gap between the abundance of cloud computing job opportunities and the shortage of people with the skills to fill them.

There was a good portion of the keynote dedicated to showcasing a couple of engineers in Brazil whose lives were changed thanks to learning the skill of cloud computing (AWS is widely-adopted in Brazil). 

It was very motivating, and I can relate to them because six years ago I decided to upskill myself (using Cloud Academy, no less!) and here I am today, teaching the next generation "how to Cloud"!

Cloud Academy aligns itself with those goals as well, as we continue to produce up-to-date content to help our audience reach high and achieve goals in cloud computing!

Moving forward together

The theme bringing everything together for this keynote was working towards a common goal: AWS, its partners, and its customers creating a positive impact in the world. I thought it was well put together, as it showed the social aspect of what we do.

And that wraps it up for the partner keynote 2022! Hopefully, you can relate your current projects to one or more of these releases and start taking advantage of them immediately! 

I am personally interested in the S3 Bucket offering for Data Exchange! 

Hmmm… I wonder if there’s still a data set out there that’s valuable and nobody is looking at it!

What’s next on re:Invent?

I look forward to the keynote by Werner Vogels, that one is sure to have at least one or two of those impactful features we all look forward to!  Enjoy the rest of re:Invent week!

The post New at AWS re:Invent: Partner Keynote with Ruba Borno, VP of Channels and Alliances appeared first on Cloud Academy.

]]>
0
New at AWS re:Invent: Swami Sivasubramanian Keynote https://cloudacademy.com/blog/new-at-aws-reinvent-swami-sivasubramanian-keynote/ https://cloudacademy.com/blog/new-at-aws-reinvent-swami-sivasubramanian-keynote/#respond Wed, 30 Nov 2022 23:01:52 +0000 https://cloudacademy.com/?p=52398 AWS's VP of Data and Machine Learning announces new services and features at re:Invent 2022, including Amazon Athena for Apache Spark.

The post New at AWS re:Invent: Swami Sivasubramanian Keynote appeared first on Cloud Academy.

]]>
On November 30th, I had the privilege of (virtually) attending Swami Sivasubramanian’s keynote address at re:Invent 2022. Swami is the Vice President of Data and Machine Learning at AWS, whose team’s mission is, “To make it easy for organizations and developers by providing the best set of capabilities to store and query data (to build scalable data driven apps), analyze and visualize their data (to do analytics) and put their data to work through machine learning.” This space is always ripe for innovation and new service releases, so let’s see what this year has in store!

Swami began his keynote by introducing the three core elements of an organization’s data strategy:

  1. Build future-proof foundations supported by core data services
  2. Weave connective tissue across your organization
  3. Democratize data with tools and education

Each of today’s announcements can be viewed through the lens of one of these core elements, which are increasingly important to all modern organizations. In fact, Swami revealed some incredible numbers around the AWS customer base:

  • More than 1.5 million AWS customers use database, analytics, or machine learning AWS services.
  • Among the top 1000 AWS customers, 94% of them use 10 or more different database and analytics services!

With that in mind, it’s fitting that the first announcement would center around one of the most popular analytics services, Amazon Athena!

Amazon Athena for Apache Spark

For today’s first major announcement, Swami talked at length about the popularity of Amazon Athena, a serverless service that makes it easy to analyze data in Amazon S3 using traditional SQL. While Athena is great for simple analysis, customers have increasingly needed to adopt Apache Spark to build more complex, distributed data analytics applications. Unfortunately, this complexity requires customers to provision and maintain the infrastructure needed to run Apache Spark.

However, taking on the burden of managing Spark infrastructure is no longer necessary with the announcement of Amazon Athena for Apache Spark! This completely serverless offering is now generally available and lets users run interactive analytics workloads in Apache Spark, with sessions that spin up in under one second. Its performance is up to 75 times faster than other serverless Spark offerings. Customers can now build robust Spark applications directly from Athena without any of the headaches typically associated with maintaining a Spark cluster. Pricing is based on the amount of compute used, measured in data processing units (DPUs) per hour.

Amazon DocumentDB Elastic Clusters

Swami spoke about the pain points many organizations experience when scaling write operations beyond a single DocumentDB instance. Complex sharding logic is difficult and time-consuming to implement, and customers have been clamoring for something of an “easy button” when it comes to scaling read and write operations. For these customers, the announcement of Amazon DocumentDB Elastic Clusters will come as very welcome news!

DocumentDB Elastic Clusters allow databases to elastically scale up to millions of operations in just minutes, with zero impact on application availability or performance. All of the sharding logic is handled automatically on your behalf, with each shard having its own associated compute and storage resources, all of which are managed automatically by AWS. Elastic Clusters are highly available by default and allow your workloads to scale up to millions of read/write operations per second and up to petabytes worth of storage.
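To appreciate what is being automated away, here is a toy Python illustration of hash-based shard routing, the kind of logic Elastic Clusters now handle for you behind a shard key (this is an illustration, not DocumentDB code):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    # Hash the shard key and map it onto a shard: deterministic, so the
    # same key always routes to the same shard, and roughly uniform.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Route three hypothetical document keys across four shards.
assignments = {k: shard_for(k, 4) for k in ["user-1", "user-2", "user-3"]}
print(assignments)
```

In a hand-rolled setup, every write path has to run logic like this and cope with resharding when capacity changes; with Elastic Clusters you only pick the shard key, and AWS manages the per-shard compute and storage.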

Geospatial ML Support in Amazon SageMaker

One of the more exciting new announcements today was the preview release of new Geospatial ML support for Amazon SageMaker. Many organizations face challenges when it comes to finding high quality geospatial data that they can use to train machine learning models. These challenges often extend beyond simply finding this data to being able to import this data, which is usually quite large, then visualize it using tools that are typically somewhat feature-limited.

This release aims to address all of these pain points. SageMaker now features interactive mapping capabilities with robust geospatial data now readily available. In a very compelling demo of this new release, we saw how tools such as a pre-trained road extraction model can be used in conjunction with POI data from Foursquare to assist first responders and aid workers looking to find passable roads after a significant flooding event.

Amazon Redshift Multi-AZ

Next, Swami announced an important feature update for Amazon Redshift: Multi-AZ support! This support allows you to make your Redshift data warehouse deployments highly available, providing you with guaranteed capacity to automatically failover in the event of an outage in your data warehouse’s primary availability zone at a fraction of the cost of maintaining separate standby instances. Best of all, no changes to your applications are required. This update is now available in preview and represents an important step in the journey to fully protect an organization’s data from core to perimeter.

Trusted Language Extensions for PostgreSQL

Swami talked about PostgreSQL and how it’s the fastest-growing database platform on AWS, both on Amazon RDS and Amazon Aurora. In particular, database developers love PostgreSQL because of its extensibility model that allows you to build extensions using popular programming languages such as JavaScript and Perl. In fact, AWS already supports several dozen PostgreSQL extensions in both RDS and Aurora.

To further demonstrate AWS’ commitment to both open source and PostgreSQL extensions, Swami announced a brand new open-source project to support PostgreSQL extensions: Trusted Language Extensions for PostgreSQL. This new open source project is licensed under the Apache License 2.0 and will allow developers to safely use and install PostgreSQL extensions on both RDS and Aurora using popular programming languages right away, without needing to wait for AWS to certify them first.
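As a rough idea of what authoring an extension through this project looks like, here is a hedged SQL sketch using the `pgtle.install_extension` function from the pg_tle project, wrapped as a Python string; the function name, argument order, and dollar-quoting should be verified against the project documentation before use:

```python
# Hedged sketch: registering and installing a tiny custom extension via
# pg_tle. The extension name, version, and body are hypothetical.
install_sql = """
SELECT pgtle.install_extension(
  'my_counter',                -- extension name (hypothetical)
  '1.0',                       -- version
  'Tiny demo extension',       -- description
  $_pgtle_$
    CREATE FUNCTION bump(x integer) RETURNS integer
      AS $$ SELECT x + 1 $$ LANGUAGE sql;
  $_pgtle_$
);
CREATE EXTENSION my_counter;
"""

print(len(install_sql) > 0)
```

The point of the project is that this runs on managed RDS and Aurora instances without superuser access and without waiting for AWS to certify the extension.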

Amazon GuardDuty RDS Protection

Speaking of Aurora, the next announcement gives Aurora an important security boost: Amazon GuardDuty RDS Protection! This update further enhances the robust, intelligent threat detection capabilities of GuardDuty by allowing you to identify suspicious or malicious activity within your Aurora databases.

Available in preview today, GuardDuty RDS Protection will profile and monitor access to your Aurora databases and, when suspicious activity is identified, issue findings to the following:

  • GuardDuty console
  • AWS Security Hub
  • Amazon Detective
  • Amazon EventBridge

This allows you to seamlessly integrate RDS protection into your existing security applications and workflows.

AWS Glue Data Quality

In addition to data security and availability, it’s critically important that the data in an enterprise’s data lake meet certain data quality standards, as the quality of any data-driven decisions will always directly correlate with the quality of the data itself. To support this, AWS Glue Data Quality offers a new (preview) set of features for AWS Glue that can automatically assess your tables and generate data quality rules, enabling better decision making and more importantly, significantly reducing the amount of effort required to ensure your data lakes and data warehouses are filled with quality data.

These data quality rules can check everything from the presence or required length of data within a particular column to valid date and identifier ranges and much more, promising to reduce manual effort from days down to hours!
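To picture what such rules check, here is a plain-Python illustration of completeness, length, and range checks. This is not Glue's actual rule language, just the idea behind it; the column names and thresholds are made up:

```python
# Hypothetical rows: the second is missing an id, the third has an
# implausible age.
rows = [
    {"id": "A123", "age": 34},
    {"id": "", "age": 34},
    {"id": "B456", "age": 210},
]

def check(row):
    # Apply simple data-quality rules and collect any violations.
    issues = []
    if not row["id"]:
        issues.append("id missing")
    elif len(row["id"]) != 4:
        issues.append("id wrong length")
    if not 0 <= row["age"] <= 130:
        issues.append("age out of range")
    return issues

# Report only the rows that fail at least one rule.
report = {i: check(r) for i, r in enumerate(rows) if check(r)}
print(report)  # {1: ['id missing'], 2: ['age out of range']}
```

Glue Data Quality's promise is that rules like these are generated automatically from your tables rather than written and maintained by hand.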

Centralized Access Controls for Redshift Data Sharing

Turns out we aren’t done with enhancements to Redshift just yet! Swami went on to discuss the importance of an end-to-end governance strategy when it comes to an organization’s critical data lakes, data warehouses, and machine learning. To that end, he referenced the 2021 release of row and cell-level permissions within AWS Lake Formation, which then segued into the next announcement: Centralized Access Controls for Redshift Data Sharing.

This feature update, which is now available in preview, allows you to centrally manage access controls for Redshift data using AWS Lake Formation without requiring complicated queries or manual, labor-intensive scripting. This simplified governance makes it easy to manage access and security for your Redshift data down to the individual row and column level.

Amazon SageMaker ML Governance

Governance continued as an important theme with the next release: Amazon SageMaker ML Governance. Much like the centralized access controls for Redshift data sharing, this enhanced suite of governance tools for SageMaker increases transparency and simplifies access control across your organization’s ML lifecycle through the following:

  • Role Manager allows you to define custom permissions for SageMaker users.
  • Model Cards allow you to simplify documentation of your ML models throughout their lifecycle.
  • Model Dashboard allows you to have a single unified view of all your ML models in a single location.

Together, these tools will provide much more robust governance and auditability across your ML development lifecycle.
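To make the Model Cards idea concrete, the sketch below assembles the kind of metadata a model card captures. The field names are simplified assumptions rather than the exact model card schema, the model itself is hypothetical, and the commented boto3 call shape should be verified against the SageMaker API reference.

```python
import json

# Illustrative model card content for a hypothetical churn model;
# real model cards follow a documented JSON schema, so treat these
# keys as a simplified approximation.
card_content = {
    "model_overview": {
        "model_name": "churn-predictor",
        "problem_type": "Binary classification",
    },
    "intended_uses": {
        "intended_uses": "Flag accounts at risk of churn for outreach.",
    },
    "training_details": {
        "objective_function": "binary cross-entropy",
    },
}

content_json = json.dumps(card_content)

# A card like this could then be registered via boto3 (call shape is
# an assumption; verify before use):
#
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_model_card(
#     ModelCardName="churn-predictor-card",   # hypothetical name
#     Content=content_json,
#     ModelCardStatus="Draft",
# )
```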

Amazon Redshift auto-copy from S3

We’ve already shown Redshift so much love today (and this week, if you include Adam Selipsky’s announcements yesterday around Aurora zero-ETL integration with Redshift and Redshift integration for Apache Spark), but we aren’t done yet! Swami announced one additional preview feature: Amazon Redshift auto-copy from S3. This feature promises to drastically simplify the process of loading your data from S3 into Redshift.

Instead of running manual copy statements every time you wish to load data from S3 into Redshift, Amazon Redshift auto-copy from S3 allows you to create Copy Jobs that will continuously and automatically load new objects directly into Redshift as they are added to S3. This allows you to fully automate a simple data ingestion pipeline without requiring any ongoing engineering effort. Very cool stuff!
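A copy job essentially turns a one-off COPY statement into a standing ingestion rule. The sketch below is an approximation of what that statement might look like; the exact clause names should be checked against the Redshift COPY documentation, and the table, bucket, and role ARN are placeholders.

```python
# Approximate copy-job statement: once created, new objects landing
# under the S3 prefix would be loaded into the table automatically,
# with no further COPY statements required.
copy_job_sql = """
COPY sales.orders
FROM 's3://my-ingest-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
JOB CREATE orders_auto_copy
AUTO ON;
""".strip()
```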

New Data Connectors for AppFlow and Data Sources for SageMaker Data Wrangler

The next couple of announcements centered around expanding the ability to use services like Amazon AppFlow and Amazon SageMaker Data Wrangler to bring together information from different systems and data stores, including on-premises applications, SaaS applications, and AWS services.

Swami began by announcing 22 new data connectors for Amazon AppFlow. These include marketing connectors such as LinkedIn Ads and Google Ads. He then followed this by announcing over 40 new data sources for Amazon SageMaker Data Wrangler (again including LinkedIn Ads and Google Ads, among many others). Together, these connectors and data sources will help organizations build integrated analytics applications and predictive models that can guide and enhance an organization’s decision-making process.

AWS Machine Learning University now provides educator training

Swami’s final announcement centered around the final core element of the data strategy he introduced at the beginning of his keynote: Democratize data with tools and education. Noting the incredible gap that exists between the 54,000 Computer Science graduates our nation’s colleges and universities produce annually and the estimated 1 million AI and ML jobs that our economy will produce by the year 2029, Swami proudly announced that AWS Machine Learning University (MLU) now provides educator training.

AWS MLU was launched in 2018 to give developers self-service access to the same machine learning training AWS uses internally, in an effort to educate the next generation of data developers. This update enhances MLU with “train the trainer” resources for educators, especially those at community colleges, minority-serving institutions, and HBCUs, to establish courses, certificates, and degree programs in data analytics, AI, and ML. Best of all, as part of this effort, faculty and students get free access to instructional materials and a live ML development environment using Amazon SageMaker Studio Lab. This exciting announcement promises to help close the future skills gap as well as bring much-needed diversity to the growing field of data science.

In Closing

Swami closed by saying, “A spark starts with one.”

I entered today’s keynote full of excitement and anticipation, and Swami did not disappoint. I’ve been thoroughly impressed by the breadth and depth of announcements and new service releases already this week, and it’s only Wednesday! Keep an eye on our blog for more exciting keynote announcements from re:Invent!

The post New at AWS re:Invent: Swami Sivasubramanian Keynote appeared first on Cloud Academy.
