Cloud Academy Blog | Webinar category
https://cloudacademy.com/blog/category/webinar/

The Biggest Challenges for Technology Leaders
https://cloudacademy.com/blog/the-biggest-challenges-for-technology-leaders/ | Tue, 29 Jun 2021

This is the second of two blog posts covering insights Cloud Academy gleaned from our recently published 2021 Technology Leadership Survey.

In our first post, Ben Keylor talked about the lay of the land in 2021: A business world already rapidly accelerating toward cloud infrastructure for its speed and agility, given an extra turbo boost due to the chaos of the COVID-19 pandemic.

We saw that enterprises are having a hard time figuring out how to upskill and hire effectively; both carry significant overhead, especially onboarding new talent to meet team goals.

Even with the inherent costs of hiring, getting the right people to help you keep innovating remains managers’ number-one priority for the rest of the year. So what other concerns do tech leaders have for 2021 and beyond?

What are tech leaders’ main worries and challenges?

We asked this very question to our survey respondents. At the time of asking, Q1 of 2021 had just wrapped up, and we wanted to know their main pain points as they looked ahead to the rest of the year and 2022.

[Chart: biggest technology concerns among survey respondents]

Not surprisingly, keeping tabs on their security posture and regulatory compliance ranks at the top of respondents’ concerns. But right up there with security and compliance are concerns about transformational activities and the talent needed to reach those objectives.

Cloud migration and other transformational activities

At just over 20%, concerns about big moves such as cloud migration are a huge deal for our respondents. This makes sense, as regardless of whether you’re an enterprise or a consultancy tasked with coaching one, the need is the same: you’ll have to bring a great deal of technical knowledge combined with a convincing effort to enact culture change.

This type of organizational-level technical knowledge is best held together with a good, sharable structure. We dive deeply into just that through a five-step process for migrations in our Cloud Migration series. The steps are:

  1. Define your strategy
  2. Start planning
  3. Assess readiness
  4. Adopt a cloud-first mindset
  5. Manage and iterate

Coming in third behind security/compliance and digital transformation, 18% of respondents say their current talent isn’t sufficient to reach business objectives. As we dug more into the data, we found that people surveyed from companies with 500-1,000 employees worry about insufficient talent at a rate 2.5 times higher than those with a workforce greater than 1,000. What might cause this? It’s a hunch, but it could be that these mid-market companies more acutely feel the strain of competition from larger enterprises that have greater access to resources and talent.

Join our chat about the full survey results

The full results of the 2021 Technology Leadership Survey provide many more key insights around enterprise investment in the cloud, current IT architectures, and the biggest technology concerns facing tech leaders today.

Join Cloud Academy’s interactive webinar on June 30th at 9 a.m. PT | 12 p.m. ET | 6 p.m. CET. Attendees will be asked polling questions to see how their results compare to survey respondents’, and we’ll dive into what the results mean for you and your business. Don’t miss out!

Why Skills Development Is Critical for Tech Success
https://cloudacademy.com/blog/why-skills-development-is-critical-for-tech-success/ | Tue, 22 Jun 2021

The digital skills gap has been well recognized and written about for years. This conundrum is now coming to a head for employers across the board as companies have made the move to a digital-first approach with cloud computing at the heart of their IT infrastructures.

Improved speed and agility, cost reduction, energy savings, and business continuity are just a few of the business benefits enabled by the cloud. Still, until recent years, some industries were hesitant to adopt the cloud due to the complexities of their legacy systems.

We can confidently say that is no longer the case. Enterprises that were biding their time have had their hands forced by the impacts of the COVID-19 pandemic, during which a remote workforce sustained the digital economy. Customers now expect efficient, globalized service delivery; the cloud-native world is here to stay, and the need for qualified tech talent is on the rise.

Upskilling vs. Hiring – How to know what to do?

The ability to attract and retain workers with the right skills has become a key differentiator for winning organizations. Even for those who hire well, the speed at which technology advances makes it difficult for employees to keep up, and for employers to have line of sight into where gaps may lie.

Hiring cycles for specialty tech talent are both long and expensive. It’s been reported that the cost to hire one software developer (excluding onboarding, ramp time, and the unexpected cost of turnover) can exceed $50,000. This has made skills development a critical piece of the success puzzle for technology leaders. But knowing who is ready, willing, and capable of learning new skills to take on a new role remains a challenge of its own.

In the recently published 2021 Technology Leadership Survey, Cloud Academy found that the majority (55%) of IT leaders either listen to anecdotal feedback from team leads or simply have no idea where the individuals on their team stack up. This information is critical to make decisions on whether upskilling can take an organization to where it needs to be, or if finding talent on the open market — a much more expensive and risky option — is required. 

The fact that less than half of enterprise technology leaders can report a concrete understanding of the talent they have versus what they need is troublesome. To learn how IT leadership is addressing the problem, we asked them to rank what resources they need to reach their goals through 2021 and beyond.

Tech Talent is Priority #1

In response, nearly 75% of leaders said that upskilling current employees to perform new roles is a key component of achieving their tech goals, and more than half (51%) said they will need to bring in new talent to do so.

Just as hiring costs money (a significant amount more than upskilling does), so does a proper training program. To that end, we asked tech leaders how much they predict they’ll invest in improving digital skills for their teams in 2021, compared to last year.

[Chart: planned IT investment in tech skills for 2021, compared to 2020]

It’s great to see that management recognized the importance of skills development, not only to keep up with evolving technologies but also to compete in the marketplace. More than 80% of tech leaders are investing more in skills development in 2021.

See the Full Results

The full results of the 2021 Technology Leadership Survey provide many more key insights around enterprise investment in the cloud, current IT architectures, and the biggest technology concerns facing tech leaders today.

Watch the on-demand replay of our webinar for a full review.

Microservices: Using Distributed Tracing for Monitoring & Troubleshooting
https://cloudacademy.com/blog/microservices-using-distributed-tracing-monitoring-troubleshooting/ | Fri, 12 Jul 2019

Modern applications can be found everywhere today. Distributed microservices, cloud-native services, managed resources, and serverless functions are all parts of this complex whole. But how can we keep track of so many elements in our production environments?

In these distributed environments, microservices communicate with each other in different ways: synchronous and asynchronous. Distributed tracing has become a crucial component of observability — both for performance monitoring and troubleshooting.

In this article, I’m going to discuss some key topics in instrumentation, distributed tracing, and modern distributed applications. To better understand these topics, watch our webinar on Distributed Tracing in Modern Applications.

What is distributed tracing?

Tracing is a way of profiling and monitoring events in applications. With the right information, a trace can reveal the performance of critical operations: how long does a customer wait for an order to be completed? It can also help break down our operations across our database, APIs, or other microservices.

Distributed tracing is a newer form of tracing that is better adapted to microservice-based applications. It allows engineers to see traces from end to end, locate failures, and improve overall performance. Instead of tracking the path within a single application domain, distributed tracing follows a request from start to finish across every service it touches.

For example, a customer makes a request on our website and then we update the item suggestion list. As the request spans multiple resources, distributed tracing takes into account the services, APIs, and resources it interacts with.

[Diagram: applications become more and more distributed]

Automated microservices instrumentation

Exploring distributed traces might sound simple, but collecting the right traces with the right context requires considerable time and effort. Let’s follow an example where we have an e-commerce website that updates our database with purchases:

[Diagram: microservices (not distributed)]

In this example, which is not distributed, to create an interesting trace, we will need to collect the following information:

  1. HTTP request details:
    1. URL
    2. Headers
    3. The ID of the user
    4. Status code
  2. Spring Web:
    1. Matched route and function
    2. Request params
    3. Process duration
  3. RDS database:
    1. Table name
    2. Operation (SELECT, INSERT, …)
    3. Duration
    4. Result
To capture this information we can either do it manually before and after every operation that we make in our code or automatically instrument it into common libraries.

By “automated instrumentation,” we mean “hooking” into a module. For example, every time we make a GET request with “Apache HttpClient,” there will be a listener. It will extract and store this information as part of the “trace.”

Collecting this information manually using logging is not recommended, since logs are not well structured. Using a more standard approach, like OpenTracing, will allow us to filter for relevant traces and present them nicely in many tools. In Python, for example, it might look like this:

[Screenshot: capturing an HTTP request in Python with OpenTracing]
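To make that concrete, here is a minimal sketch (not the exact code from the screenshot above) of manually capturing an outbound HTTP request with the OpenTracing Python API, assuming a concrete tracer such as Jaeger has already been registered as the global tracer; the URL and tag names are illustrative:

import requests
import opentracing

tracer = opentracing.global_tracer()  # assumes a real tracer (e.g. Jaeger) was configured elsewhere

def traced_get(url, user_id=None):
    # One span per outbound HTTP call; the tags mirror the request details listed above.
    with tracer.start_active_span('http_get') as scope:
        span = scope.span
        span.set_tag('http.url', url)
        if user_id is not None:
            span.set_tag('user.id', user_id)
        response = requests.get(url)
        span.set_tag('http.status_code', response.status_code)
        return response

Every field we care about has to be attached by hand, which is exactly the heavy lifting described next.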

As you can see, this kind of instrumentation requires heavy lifting. It involves integrating with our libraries, as well as constant maintenance to support our dynamic environments.

Standards and tools

OpenTracing

Luckily for us, there are already microservices standards and tools that can help us to get started with our first distributed traces. The first pioneer was OpenTracing, which is a new, open distributed tracing standard for applications and OSS packages.

Using OpenTracing, developers can collect traces into spans, and store extra context (data) to each one of them. For example:

[Screenshot: OpenTracing code sample]
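As a minimal sketch of that idea (assuming an OpenTracing-compatible tracer is already configured; the operation and tag names are hypothetical):

import opentracing

tracer = opentracing.global_tracer()

# Parent span for the overall operation, with extra context stored as tags.
parent = tracer.start_span('checkout')
parent.set_tag('order.id', '1234')

# Child span for a sub-operation; child_of records the relationship between the two.
child = tracer.start_span('charge_payment', child_of=parent)
child.set_tag('payment.provider', 'example-gateway')
child.finish()

parent.finish()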

Spans can have a relation – `child_of` or `follows_from`. These relations can help us get a better understanding of performance implications.

To trace a request across distributed microservices, we must implement the inject/extract mechanism: inject a unique “transaction ID” into the outgoing call and extract it on the receiving service. Note that a request can travel between microservices in HTTP requests, message queues, notifications, sockets, and more.
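A minimal sketch of that inject/extract flow over HTTP headers might look like this (again assuming an OpenTracing-compatible tracer; the operation names are hypothetical):

import opentracing
from opentracing.propagation import Format

tracer = opentracing.global_tracer()

# Sending side: serialize the current span's context into the outgoing HTTP headers.
def outgoing_headers(span):
    headers = {}
    tracer.inject(span.context, Format.HTTP_HEADERS, headers)
    return headers

# Receiving side: rebuild the context from the incoming headers and continue the same trace.
def handle_request(request_headers):
    parent_ctx = tracer.extract(Format.HTTP_HEADERS, request_headers)
    with tracer.start_active_span('handle_request', child_of=parent_ctx):
        pass  # process the request here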

Another common standard is OpenCensus, which collects application metrics and distributed traces. OpenCensus and OpenTracing recently merged into a unified standard called OpenTelemetry.

Jaeger

After the exhaustive task of collecting distributed traces comes the part of visualizing them. The most popular open source tool is Jaeger, which is also compatible with the OpenTracing format. Jaeger outputs our traces into a timeline view, which helps us understand the flow of the request. It can also assist in detecting performance bottlenecks:

[Screenshot: Jaeger timeline view for detecting performance bottlenecks]
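For reference, pointing a Python service at a local Jaeger agent is typically just a few lines of configuration. This is a minimal sketch assuming the jaeger-client package, with a hypothetical service name and a sample-everything sampler (fine for experimenting, not for production):

from jaeger_client import Config

config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},  # sample every trace
        'logging': True,
    },
    service_name='orders-service',  # hypothetical service name
)
tracer = config.initialize_tracer()  # also registers itself as the OpenTracing global tracer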

Managed solution

Ultimately, you might want to consider an automated distributed tracing solution. Epsagon, for example, uses automated instrumentation to provide microservices performance monitoring and visualization of requests and errors in an easier way:

[Diagram: automated visualization of microservices requests and errors]

A managed solution for distributed tracing provides the following benefits:

  1. Traces are collected automatically, without code changes.
  2. Traces and service maps are visualized with metrics and data.
  3. Data and logs can be queried across all traces.

Summary

Distributed tracing is crucial for understanding complex microservices applications. Without it, teams can be blind to what’s happening in their production environments when a performance issue or other error occurs.

Although there are standards for implementing, collecting, and presenting distributed traces, it is not that simple to do manually. It involves a lot of effort to get up and running. Leveraging automated tools or managed solutions can cut down the level of effort and maintenance, bringing much more value to your business.

To dive deep into microservices, check out Cloud Academy’s Microservices Applications Learning Paths, Courses, and Hands-on Labs.

Introduction to Monitoring Serverless Applications
https://cloudacademy.com/blog/introduction-to-monitoring-serverless-applications/ | Thu, 21 Feb 2019

Serverless as an architectural pattern is now widely adopted, and it has quite rightly challenged traditional approaches to design. Serverless enhances productivity through faster deployments, bringing applications to life in minimal time. Time not spent provisioning and maintaining production infrastructure can be invested elsewhere to drive business value – because at the end of the day, that’s what matters!

Now, with your Serverless application shipped into production, maintaining optimal performance requires you to focus on the operational question of “what’s going on in production?” In other words, you’ll need to address observability for every operation that takes place within your Serverless application.

Observability

Observability binds together many aspects: monitoring, logging, tracing, and alerts. Each observation pillar provides critical insight into how your deployed Serverless application is working, and collectively they show whether your Serverless application is not just working but delivering real business value.

In this post, we are going to discuss each observation pillar, providing you with examples and solutions, which specifically address the Serverless ecosystem.

Monitoring

Monitoring is the main pillar which tells us, “is my system working properly?”. “Properly” can be defined by multiple parameters:

  1. Errors: every request or event that yielded an error result
  2. Latency: the amount of time it takes for a request to be processed
  3. Traffic: the total number of requests that the resource is handling

Taken together, these parameters allow monitoring to detect error-prone services, performance degradation across our resources, and even scaling issues when we hit higher traffic rates.

Much of our serverless deployment is undertaken within a Function as a Service (FaaS) offering. FaaS provides us with our base compute unit, the function. Popular examples of cloud-hosted managed FaaS services include AWS Lambda, Azure Functions, and Google Cloud Functions.

Using AWS Lambda as our FaaS of choice, monitoring of Lambda functions is accomplished by using CloudWatch Metrics. With CloudWatch Metrics, every deployed function is monitored using several insightful metrics:

[Screenshot: serverless metrics dashboard]

These metrics include:

  1. The number of invocations.
  2. Min/avg/max duration of invocations.
  3. Error counts, and availability (derived from errors/invocations ratio).
  4. The number of throttled requests.
  5. Iterator Age – The “age” of the oldest processed record.
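As a minimal sketch, pulling one of these metrics programmatically with boto3 might look like this (assuming AWS credentials are configured; the function name is hypothetical):

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client('cloudwatch')

# Sum of errors for one function over the last hour, in 5-minute buckets.
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/Lambda',
    MetricName='Errors',
    Dimensions=[{'Name': 'FunctionName', 'Value': 'my-function'}],  # hypothetical name
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Sum'],
)
for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Sum'])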

For serverless, these metrics still miss some unique issues such as timeouts, out-of-memory errors, and even cost, which we can monitor using a dedicated serverless monitoring and troubleshooting tool, Epsagon:

[Screenshot: serverless monitoring in Epsagon]

Logging

When a problem has been found according to our monitoring parameters, we then need to troubleshoot it. We accomplish this by consulting and analyzing all relevant logs.

Logs can be generated by prints, custom logging, and/or exceptions. They often include very verbose information – that’s why they are a necessity for debugging a problem.

When approaching our logs, we need to know what we are looking for, so searching and filtering within and across logs is essential. In AWS Lambda, all of our logs are shipped to AWS CloudWatch Logs. Each function is assigned to its own log group and one log stream for each container instance.

[Screenshot: log archive in CloudWatch Logs]

Once we find the correct log file, we can see the error message and gain a better understanding of what initiated the error.
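Programmatic filtering is also possible. A minimal sketch with boto3 (the function name is hypothetical; Lambda log groups follow the /aws/lambda/<function-name> convention) might look like this:

import boto3

logs = boto3.client('logs')

# Search one function's log group for error lines.
response = logs.filter_log_events(
    logGroupName='/aws/lambda/my-function',  # hypothetical function name
    filterPattern='ERROR',
)
for event in response['events']:
    print(event['logStreamName'], event['message'])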

There are better ways to look for logs than just using CloudWatch Logs. A known pattern is to stream log data to a dedicated log aggregation service, for example, ELK. With this in place, when a function fails, you can simply query and find the corresponding log for the specific function invocation.

The main problem often attributed to logs is that they have minimal or no context. When our applications become distributed, trying to correlate logs from different services can be a nightmare. For that particular issue, distributed tracing comes to the rescue.

Tracing

Tracing, or specifically distributed tracing, helps us correlate events captured across logs on different services and resources. When applied correctly, it lets us find the root cause of our errors with minimal effort.

Let’s imagine, for example, that we’ve built a blog site that has the following public endpoints:

  1. View an existing post /post
  2. Post a new blog post /new_post

And it consists of these resources and services:

[Diagram: resources and services behind the blog site]

Now, by using the monitoring and logging methods from before, we’ve noticed that there’s an error in our Post Analysis lambda. How do we progress from here and find the root cause of this issue?

Well, in microservices and serverless applications specifically, we want to be able to collect each trace from our microservices and gather them together as one end-to-end execution.

In order for us to analyze distributed traces, we need two main things:

  1. A distributed tracing instrumentation library
  2. A distributed tracing engine

When instrumenting our code, the most common approach is to implement OpenTracing. OpenTracing is a specification that defines the structure of traces across different programming languages for distributed tracing. Traces in OpenTracing are defined implicitly by their spans; a span is an individual unit of work done in a distributed system. Here’s an example of constructing a trace with OpenTracing:

span = tracer.start_span(operation_name='our operation')  # assumes an OpenTracing-compatible tracer
scope = tracer.scope_manager.activate(span, True)          # True = finish the span when the scope closes
try:
    pass  # Do things that may trigger a KeyError.
except KeyError as e:
    span.set_tag('keyerror', f'{e}')
finally:
    scope.close()

It’s advised to use a common standard across all your services and have a vendor-neutral API. This means you don’t need to make large code changes if you’re switching between different trace engines. There are some downsides to this approach; for example, developers need to maintain span- and trace-related declarations across their codebase.

Once instrumented, we can publish and capture those traces in a distributed tracing engine. Some engines let us visually see an end-to-end transaction within our system. A transaction is basically the story of how data has been transmitted from one end of the system to the other.

With an engine such as Jaeger we can view the traces organized in a timeline manner:

[Screenshot: traces organized in a Jaeger timeline view]

This way we can try and find the exact time this error happened and therefore find the originating event that caused the error.

By utilizing Epsagon, a purpose-built distributed tracing application which we earlier introduced, we can see that the errored lambda in question has received its input from a malformed lambda (Request Processor) two hops earlier, and that it handled an authentication error and propagated a false input to the Post Analysis lambda via the SNS message broker.

[Screenshot: end-to-end transaction trace]

It’s important to remember that when going serverless we have broken each of our microservices into nano-services. Each of them will have an impact on the other, and attempting to figure out the root cause can be very frustrating.

Epsagon tackles this issue by visualizing the participating elements of the system in a graph and presenting trace data directly within each node, which significantly reduces the time involved in investigating the root cause.

Alerts

Last but not least come the alerts. It’s pretty obvious to everyone that we don’t want to sit in front of a monitor 24/7 watching for problems.

Being able to send alerts to an incident management platform is important so that the relevant people get notified and can take action. Popular alerting platforms are PagerDuty, OpsGenie, and even Slack!

When choosing your observability platform, you’ll need to make sure you can configure alerts based on the type of issue, the involved resource, and the destination (i.e., integrations with the platforms above). For Lambda functions, basic alerts can be configured in CloudWatch Alarms:

[Screenshot: a CloudWatch alarm configuration]

In this example, we want to get notified when we breach a threshold of 10 or more errors within 2 consecutive 5-minute windows (a sketch of creating that alarm programmatically follows the list below). A dedicated monitoring tool can configure more specific alerts, such as:

  1. Alert if the function is timing out (rather than a general error).
  2. Alert on specific business flows KPIs.
  3. Alert regarding the performance degradation of a resource (for example Redis).
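Going back to the CloudWatch example above, a minimal sketch of creating that alarm with boto3 (the alarm name, function name, and SNS topic are hypothetical) might look like this:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when a function logs 10 or more errors in each of 2 consecutive 5-minute windows.
cloudwatch.put_metric_alarm(
    AlarmName='my-function-errors',
    Namespace='AWS/Lambda',
    MetricName='Errors',
    Dimensions=[{'Name': 'FunctionName', 'Value': 'my-function'}],
    Statistic='Sum',
    Period=300,
    EvaluationPeriods=2,
    Threshold=10,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:alerts'],  # hypothetical topic
)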

Summary

We understand that observability is a broad term that unifies many important aspects of monitoring and troubleshooting applications (in production or not).

When going serverless, observability can become a bottleneck for development velocity, so a proper tool dedicated to the distributed, event-driven nature of these applications must be in place.

Epsagon provides an automated approach to monitoring and troubleshooting distributed applications such as serverless ones, reducing the friction developers face in maintaining observability over their production environments.

If you’re curious and would like to learn more about serverless applications, join our webinar on the Best Practices to Monitor and Troubleshoot Serverless Applications next March 7th at 10 am PT.

Building Security Teams in a Competitive Talent Market: These Are The Droids You’re Looking for
https://cloudacademy.com/blog/building-security-teams-in-a-competitive-talent-market/ | Tue, 02 Oct 2018

John Visneski is the Head of Security and DPO at The Pokemon Company International. If you missed the webinar we organized in collaboration with John Visneski, you can still watch it on demand; simply click here.

The reasoning behind the popularity of this perspective is clear, if not unique to the cybersecurity field. Organizations in both the private and public sector are embracing technology in ways that are only limited by the imaginations of their workforce. Cloud computing used to be viewed primarily as a more cost-effective way to conduct IT business. However, organizations are increasingly leveraging the cloud to expand and in some cases fundamentally change their business. The knock-on effect of this is that technology wizards of all shapes and sizes are not just in demand; that demand is now exponential.

In this environment, a paradigm shift is necessary if organizations want to recruit and retain cybersecurity talent. There are far too many hiring managers in search of a purple unicorn that lays golden eggs. In reality, the talent pool is much larger than one would expect.

In order to bridge this perceived gap, consider tailoring your approach to the following:
1. Prioritize attitude and aptitude above all else
2. Find candidates with an operational mindset
3. Avoid binary thinkers, embrace problem solvers

You will notice that none of these suggestions mention security. To hijack and add to the old phrase: it’s the talent economy, stupid. Talent can be measured in many ways and at many levels. The key to building your security team is expanding the aperture of your search.

1. Prioritize attitude and aptitude above all else

This won’t be the first article written that references how quickly the technology space is changing, particularly in security. In the same way that organizations are adopting new technology to enable their business or mission, threat actors are leveraging the same technology to prosecute their own agendas. In many cases, these threat actors are much more willing to embrace cutting-edge, innovative technology because the risk of adoption failure is relatively low. A hacker cell in Estonia doesn’t typically report to a CFO on the return on investment for time spent developing or adopting tools to exploit vulnerabilities. For legitimate organizations to keep pace, their security teams need to be willing to adapt and overcome at an incredibly high rate.

This ability to adapt is much easier said than done. It requires talent that has the drive to continue to learn new techniques, tactics, technologies, and integrations. This talent also needs to be ready to throw what they thought they knew out the window should the environment demand it.

To find this talent, try prioritizing attitude and aptitude above the specific technical skill sets you’re looking for. How eager are they to embrace new challenges? What in their background implies that they can adapt to change? Find smart individuals with a positive attitude who will not be discouraged when the problem set changes, and who have the aptitude to continually keep pace with their internal organization and external variables such as changing landscapes and threat actors.

2. Find candidates with an operational mindset

Some of the best security professionals in the world didn’t start their careers in the security space. If you took a poll, you’d find that many come from fields like systems administration, infrastructure, DevOps, and quality assurance, while others come from outside technology fields entirely. I started out as a combat communications officer within the United States Air Force.

The common thread with many of these fields both within and outside the technology space is that they possess an operational mindset. To wit, they understand how the sausage is made. The beauty of these talent pools is that they are often the best at understanding how systems fit together and where the gaps and seams are within said systems. An increasing number of these individuals are eager to embrace automation because they’ve seen how it can be a force multiplier for their business. This mindset is focused on business operations.

One of my best security engineers started out as a test and quality assurance engineer. When he applied for the position, his resume had little to no direct security experience to speak of. He did possess a keen mind for automation, an understanding of how systems fit together, a nose for finding gaps and seams within systems, and ideas on fine-tuning these systems to support business operations. He also happened to be a bit of a security whiz in his free time, but that is hardly a concrete bullet to include on a resume. All he needed was someone to take a shot on him, focus his skillset on operationalizing a security program, and provide him the time and resources required to get up to speed. Within no time, he became an Offensive Security Certified Professional and an invaluable asset not just to my team but to our partners in DevOps. I would put him up against some of the very best security engineers in the industry.

3. Avoid binary thinkers, embrace problem solvers

Most security programs still have a very well-earned reputation as the part of the business that tells people what they can’t do, as opposed to helping enable what they can do. Much of this is derived from the tendency for technology professionals to think in terms of what is a ‘right’ answer and what is a ‘wrong’ answer as opposed to thinking in terms of ‘what helps the business be successful.’ The end result is that most of the business stops inviting the security teams to meetings, leading to a decrease in security posture due to a lack of visibility into business process and operations.

The goal is to avoid the perception of security as the “Dr. No” team. Find candidates who are not concerned with what constitutes a ‘right’ answer, but are more concerned with helping the business navigate the gray space between options. These are soft skills, which makes them much harder to teach than it is to send someone to security training. Concentrating on these skills will also help avoid the sort of technology lock-in that limits your search for cloud expertise. Just because you are an Amazon Web Services (AWS) shop, you shouldn’t limit your search to professionals with AWS-centric experience. There are plenty of engineers and operations analysts with deep knowledge in cloud computing that is derived from Microsoft Azure or Google Cloud Platform who can pivot to AWS with ease.

The purpose of this post isn’t to say that you shouldn’t hire individuals with deep security experience. They do exist. However, they exist in much smaller numbers than the pool of talent that has many of the attributes that will make them successful members of your security team. These individuals have the ability to solve problems, an operational mindset with an understanding of how systems fit together, and the attitude and aptitude to keep pace with an ever-changing environment. All it takes is for hiring managers to expand the aperture of their search and be willing to invest in their team personally and professionally.

To learn about how to build security teams in a competitive talent market, watch my latest Cloud Academy webinar. In it, I discuss practical strategies to help teams at any level of maturity build out a cloud-focused security practice. You can also check out Cloud Catalog and Cloud Roster, two useful tools to help you close the skills gap within your company.

Build a Security Culture Within Your Organization
https://cloudacademy.com/blog/building-a-security-culture-within-your-organization/ | Wed, 01 Aug 2018

At this year’s AWS Summit Sydney, I was invited to speak about security culture and share a few practical examples of how organizations can build a positive security culture through increased visibility and enablement at all levels. But, what is a positive security culture?

At Xero, we take a customer-centric approach with our product teams. In preparing for my talk, I spoke with another Xero team member who shared his approach to security:

  • If he needs to encrypt at rest, it should be easy.
  • Self-service trumps having to request things from another team, which trumps having to raise a ticket. If it’s too hard, he would do it later.
  • If he needs to patch his instances for vulnerabilities, it has to be easy.

Ultimately, what he wanted was a faster response, fewer tickets, and more enablement for him and his teams. As a principal engineer on one of our product teams, these were now key requirements that he expected my security team to deliver.

Attendees of my AWS Summit presentation went home with four key takeaways, and we will explore them in this post.

Here are the four guiding principles to govern your organization’s security policies:

  1. “Shared responsibility” includes your developers and security partners
  2. Operational visibility is required to embrace DevSecOps
  3. Flexible access management directly helps with the principle of least privilege
  4. Automated compliance (or “Compliance as Code”) is the next big challenge

Let’s drill down into each of these items.

Shared responsibility includes your developers and security partners

It is important to be aware of the shared responsibility model under which public cloud providers operate. This model clearly defines the responsibilities of each party when operating in the public cloud. However, from a practical viewpoint, this model has now been extended even further to include your security partners, who also take on some of the responsibility of safeguarding your digital assets by providing robust controls and visibility. Under DevSecOps, the responsibility is shared even further by entrusting developers to react to security visibility information and to not undermine established security controls.

Within Xero, shared responsibility is a commitment and agreement between our developers, security partners, the public cloud provider itself, and the security team. Increasingly, the security team is not only just the gatekeeper but also the glue that drives and facilitates this extended shared responsibility model between all of these parties.

Operational visibility is required to embrace DevSecOps

When I first deployed security systems into the public cloud more than four years ago, I believed they would largely operate in a similar way to traditional on-premises equivalents. I quickly learned that first I had to understand the different ways these systems would operate.

It is important to make operational visibility a design criterion for security implementations. Use systems that provide meaningful real-time alerts, detailed metrics, and powerful dashboards. This way, you can change the mindset of your developers by providing them access to these tools and truly building on the principle that “security is everyone’s responsibility.”

Our site reliability team often uses the phrase, “if you don’t put a metric on it, how do you know what is ‘normal’?” The key takeaway here is that measuring and monitoring everything is critical, especially when building out a new environment in the public cloud.

Flexible access management directly helps with the principle of least privilege

The traditional approach to access control is to implement human gatekeepers tasked with the responsibility of providing access to systems as requested. At face value, this approach makes sense, but it starts to break down when:

  • The process is too cumbersome, leading to a greater amount of access being requested “just in case.” The gate is essentially just left open.
  • Gatekeepers do not understand what they’re protecting, leading to the wrong people getting access and defeating the point of security in the process.
  • The logical divide between resource owners and gatekeepers prevents auditing of access.

Today, development teams work quickly and dynamically. Your security processes need to align with this approach or you are at risk of seeing them become redundant. However, this new, highly dynamic nature can be leveraged to benefit security, particularly around adherence to the “least privilege” principle.

At Xero, identity and access requests once came through in the form of tickets; resolving these was often a manual and time-consuming task. In response, our Identity and Access team developed an internal system called PACMAN to enable our product teams to self-serve their identity and access needs. To make it even easier, we made PACMAN accessible via internal tools, including our intranet and Slack channels.

Via PACMAN, our product teams can query the status of their identities within the Xero ecosystem, reset their passwords for all of these identities, and request access to additional AWS accounts. Access is provided as and when it is needed, and administered by those who own the resources rather than by standalone gatekeepers.

Automated compliance is the next big challenge

Many organizations are subject to compliance obligations such as the European Union’s General Data Protection Regulation (GDPR), the information security standard ISO/IEC 27001, Service Organization Control (SOC 2) reporting standard for data in the cloud, and Payment Card Industry Data Security Standard (PCI-DSS). This is challenging when operating many thousands of computing instances and data stores held in hundreds of accounts within the public cloud. This is where manual approaches to compliance start to break down, much in the same way that manual administration in the cloud does.

The answer is to treat policy compliance as a form of automation in itself. A practical example is to conduct real-time conformity scanning against the Center for Internet Security (CIS) cloud benchmarks instead of performing manual checks or using spreadsheets. CIS baseline scanning can be completed using a variety of available tools, with results communicated directly to product teams who own the affected systems.
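To give a flavor of what such a check can look like, here is a minimal sketch (not Xero's actual tooling) of one CIS-style control, flagging security groups that allow SSH from anywhere, written with boto3:

import boto3

ec2 = boto3.client('ec2')

# Flag any security group rule that opens SSH (port 22) to the whole internet.
for group in ec2.describe_security_groups()['SecurityGroups']:
    for permission in group.get('IpPermissions', []):
        if permission.get('FromPort') == 22 and permission.get('ToPort') == 22:
            for ip_range in permission.get('IpRanges', []):
                if ip_range.get('CidrIp') == '0.0.0.0/0':
                    print(f"Non-compliant: {group['GroupId']} allows SSH from 0.0.0.0/0")

Running checks like this on a schedule and routing the findings to the owning product team is what turns a written policy into compliance as code.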

This enables Risk and Compliance teams to easily demonstrate that policies are adhered to, and they have the ability to generate such reports at-will. In this age where “everything is code,” compliance is no exception.

So, what did I learn?

Making security components available to developers and engineers can be the biggest positive influence on an organization’s security culture. To be successful in doing this, a security team needs to quickly evolve to become just as agile as the product teams around them. This includes operating based on core principles and communicating effectively and often.

Security visibility and enablement improves awareness and ownership across an entire organization, reinforcing the message that security is everyone’s responsibility. Foster a solid security culture and formally distribute security responsibility to other teams within your organization.

Founded in 2006 in New Zealand, Xero is one of the fastest growing software-as-a-service companies globally, leading the New Zealand, Australian, and United Kingdom cloud accounting markets, employing a world-class team of more than 2,000 people. Forbes identified Xero as the World’s Most Innovative Growth Company in 2014 and 2015.

To learn more about making security tangible, watch our webinar, Security is a Journey – Building a Culture of Security, on-demand.

Cloud Academy Talk: Get Ready for the AWS Security Specialist Certification
https://cloudacademy.com/blog/cloud-academy-talk-get-ready-for-the-aws-security-specialist-certification/ | Thu, 14 Jun 2018

Security is critical for successful cloud adoption. Many enterprises are willing to invest their money to ensure that data and applications are kept safe, with Forrester reporting a 28% increase in cloud security spending to $3.5 billion by 2021.
The new AWS Certified Security – Specialty exam, released in April, is an opportunity for security professionals to validate their skills through certification and for teams to build deep knowledge in key security services that are unique to the AWS platform. General knowledge alone isn’t enough to master platform security fundamentals, given the steady stream of new features and updates released every year.

Our Head of Content, Andrew Larkin, and our AWS Lead, Stuart Scott, recently hosted a live session, ‘Cloud Academy Talk: AWS Security Specialist Certification and Beyond,’ sharing preparation tips and strategies to help you master the topics covered in the AWS Certified Security – Specialty exam.

Watch the recording of the webinar here and share it with your team.

Stuart Scott has also recently published a new Learning Path, Security – Specialty Certification Preparation for AWS, designed specifically for those looking to gain a deep understanding of AWS security services, including the many different security mechanisms and techniques that AWS offers to secure your infrastructure and data from both internal and external threats and exposures.

GDPR Compliance: Low Cost, Zero-Friction Action Items
https://cloudacademy.com/blog/gdpr-compliance-low-cost-zero-friction-action-items/ | Mon, 26 Mar 2018

George Gerchow is Chief Security Officer at Sumo Logic and Adjunct Honorary Lecturer at Cloud Academy. View the on-demand recording of our recent webinar, Establishing a Privacy Program: GDPR Compliance & Beyond with Mr. Gerchow and Jen Brown, Data Protection Officer at Sumo Logic.

[Image: GDPR compliance action items. Source: Infotrust]

In 2016, my Data Protection Officer and I felt like the InfoSec version of the Northmen in GOT, telling everyone “GDPR is coming” while the rest of the world basked in the sun and sort of laughed it off as myth. Throughout 2017, as we took practical steps to ease the pain of the root-canal-that-is-GDPR and continued warning our peers, it seemed like most of the world was still not preparing. So here we are, less than two months away from the “White Walkers” of privacy, and privacy panic just got real.

Organizations are trying to educate themselves and put solutions in place at the same time. Some are over rotating, reading up on all 99 articles, trying to interpret them, and wondering how they will be affected. Others are blasting out 100-page Data Processing Addenda (DPA) at a frantic rate, asking for processors to put their organization’s agility and profit on the line without thinking through what is really needed.
Don’t get me wrong, we are not perfect, but throughout this continuous journey, we have learned a thing or two that might be useful to others. One of the most important is that we should all be transparent and collaborate. In that spirit, I wanted to share a few practical tips on what you should do today to jump-start your journey to GDPR compliance and privacy best practices.

Let me take a step back to 2016 and start from the beginning. When whispers of GDPR started hitting our radar during an EU Privacy Shield assessment, our team started digging into very specific low cost, zero-friction action items that could make this process easier. So, what actions did we take to address GDPR compliance? We decided the first things we needed to do were to:

  1. Appoint a Data Privacy Officer or DPO
  2. Build a ”privacy by design” program
  3. Establish a Data Processing Addendum or DPA
  4. Define and maintain a continuous privacy roadmap

By prioritizing these items and being transparent while showing the best level of effort, we knew GDPR readiness for our organization could be achieved without crippling our business.
Let’s break each one of these down.

Appoint a Data Privacy Officer

Whether you hire, appoint, or use a contract DPO, this might be the most critical action you take towards GDPR goodness. A DPO will guide your organization down the privacy path and help bridge the gaps in knowledge while serving as a one-stop shop for refining process and procedure. Your DPO should work with each business unit to cover data processing and data classification best practices that go well beyond GDPR. Also know that if you do business across the pond, you need to appoint a DPO representative who lives in EMEA. If you employ more than 10 people in Germany, you need a DPO physically present in Germany as well.

Privacy by Design

When you ask any CISO, CSO, or CIO if they care about GDPR, you will get mixed results. If you ask those same individuals if they care about privacy, the answer will be “yes, 100% of the time.” All organizations care about the privacy of data, personally identifiable information (PII), intellectual property (IP), and regulatory requirements. If you build a mature privacy program that accounts for the development and procurement of new services and lines of business, you will have established an operational baseline that addresses the fundamentals of future regulations like GDPR. Moving forward, all you need to do is a gap analysis with risk acceptance and remediation to achieve compliance with the new regulation.

Data Processing Addenda

Most organizations need two: one as a processor, one as a controller. As a processor, this legal document will be your stance on how you handle customer data. It is best practice to be thorough in describing topics like how you process and secure data, data subject rights, transfer of personal data, and identifying sub-processors. Be sure to proactively address any foreseeable objections in the DPA. It is key to create your own DPAs — you do not want all of your customers’ or controllers’ legal language around data privacy to cripple your organization’s resources with red lines across legal, contract, security, and compliance. As a controller, it doesn’t hurt to have your own DPA on hand, but give the processor’s DPA a shot first before you make the investment. Also, make sure to do deep reviews of the DPA with your Privacy Attorney.

Define and Maintain a Privacy Roadmap

The journey to GDPR Compliance is never-ending, so a continuous roadmap with realistic timelines is a necessity. Items on that roadmap may include the right to erasure, technology requirements like encryption, or data loss protection (DLP). I have seen several forward-thinking companies list a yearly Data Protection Impact Assessment (DPIA) on the roadmap to show third-party validation and gap analysis. It is almost impossible to be 100% compliant 100% of the time. What the auditors want to see is a gap analysis and a roadmap that shows your plan to close or significantly improve your posture on privacy. DPIA’s will be key for your organization to understand the current state and to ease any concerns other organizations may have when evaluating your readiness. We cannot overstate how important transparency and level of best effort will be when it comes to privacy and GDPR. If you are honest with where your organization currently is and have clear and documented steps on how your organization and team will make progress throughout the year, life will be good. In addition to everything we already listed, you might want to read our 4 Best Practices to Get Your Cloud Deployments GDPR Ready.

What’s Next?

Moving forward, we will dive deeper into ways you can streamline GDPR readiness by becoming familiar with key concepts in a “privacy by design” model, which articles matter, what recitals are, and how you can start planning for a DPIA while building a roadmap for the future. GDPR is just the beginning of what’s to come with data privacy regulations. Already, Japan is starting to lead the way in Asia with its “Act on the Protection of Personal Information” (APPI). By sharing knowledge, using many things we have in place, and working together, we can be the “Night’s Watch” of privacy.

If you want to learn more about GDPR, check out our Learning Path on Using AWS Compliance Enabling Services.

“On-Premises” FaaS on Kubernetes: Webinar Recap and Q&A
https://cloudacademy.com/blog/faas-on-kubernetes-webinar-recap/ | Fri, 03 Mar 2017

Last week, I hosted a Cloud Academy webinar about open-source FaaS and FaaS on Kubernetes. If you missed the live event, you can watch it here.

In this post, I’d like to briefly recap some of the main topics covered, including a high-level introduction to FaaS and an overview of the main open-source projects. Also, I will be answering all of the questions that we received during the webinar, including the ones that I didn’t have time to answer during the live event.

FaaS on Kubernetes: Webinar Agenda

My agenda included the following points:

  1. What does FaaS mean?
  2. FaaS in the open-source world
  3. FaaS frameworks for Kubernetes
  4. Pros & Cons of “on-premises” FaaS

Let me explain why “on-premises” is in quotes here: FaaS was born in the public cloud, and here I wanted to explore alternative ways of building your own FaaS platform, either in your own private cloud or on top of the IaaS layer of a public cloud.

While Kubernetes is not the only container orchestration system available, I decided to focus on a couple of open-source FaaS frameworks for Kubernetes that have been announced recently: Funktion and Fission.

What does FaaS mean?

The concept of Function as a Service (FaaS) was born with AWS Lambda, and it kept growing as one of the core components of the serverless movement. I would actually refer to FaaS as the main component. Without the ability to bring your own code and execute it on demand, serverless would just represent brainless outsourcing.

On Wikipedia, FaaS is directly compared with PaaS (Platform as a Service) solutions. This is a crucial point for many skeptics, in my opinion.

It reads:

In most PaaS systems the system continually runs at least one server process and even with auto scaling a number of longer running processes are simply added or removed meaning that scalability is more visible to the developer as a problem.

The first key point is that, even if the underlying architecture is very similar, PaaS developers have to explicitly take care of scalability concerns since, for the most part, the concept of “server” is still part of their daily workflow. With FaaS, you no longer have to think of servers or containers. Understanding how containers are handled under the hood is always useful, of course.
Quoting Wikipedia again:

In a FaaS system, the functions are expected to start within milliseconds in order to allow handling of individual requests. In a PaaS system, by contrast, there is typically an application thread which keeps running for a long period of time and handles multiple requests.

This is the second key point, which has a direct effect on the corresponding pricing model. Indeed, FaaS will only charge you for the actual execution time of your function, rather than the whole application container life.
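
To make that difference concrete, here is a tiny back-of-the-envelope sketch in Python. The rates and workload figures are hypothetical placeholders, not real vendor prices; the point is only that a FaaS platform bills per execution slice, while a PaaS-style container bills for its whole lifetime.

    # Back-of-the-envelope comparison of the two pricing models described above.
    # All rates below are hypothetical placeholders, not real vendor prices.
    ALWAYS_ON_CONTAINER_PER_HOUR = 0.05   # PaaS-style: pay for the container's whole lifetime
    FAAS_PER_100MS = 0.0000002            # FaaS-style: pay per 100ms of actual execution

    requests_per_month = 1_000_000
    avg_execution_ms = 200

    paas_cost = ALWAYS_ON_CONTAINER_PER_HOUR * 24 * 30
    faas_cost = requests_per_month * (avg_execution_ms / 100) * FAAS_PER_100MS

    print(f"Always-on container: ${paas_cost:.2f}/month")
    print(f"Per-execution FaaS:  ${faas_cost:.2f}/month")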

Roughly speaking, I identified the following minimum requirements of FaaS:

  • Function as the unit of delivery: Each function should be an independent unit of delivery and deployment with its own configuration and properties (e.g. versions, environment variables, permissions, etc.); a minimal sketch follows this list.
  • Multi-language support & BYOC: You could potentially develop each function with a different programming language. In other words, you can Bring Your Own Code and your programming language of choice as well.
  • Transparent scaling & PAYG: High availability and scalability should be as transparent as possible, without any impact whatsoever on the development or deployment process. This should hold both for performance and pricing, meaning that you automatically pay as you go for what your functions actually consume, whatever the billing cycle.
  • No infrastructure management: As a direct consequence of the previous point, you shouldn’t have to interact with the underlying infrastructure unless your FaaS platform is running on your “on-premises” cluster and you are responsible for such infrastructure as well. We will discuss the pros and cons of this use case later in this post.
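
To make the first requirement concrete, here is a minimal sketch of a function as an independent unit of delivery, written in the AWS Lambda handler style. The TABLE_NAME variable and the event shape are hypothetical and only illustrate the idea of per-function configuration.

    import json
    import os

    def handler(event, context):
        # Configuration travels with the function (per version/stage), not with a server.
        # TABLE_NAME is a hypothetical environment variable used only for illustration.
        table_name = os.environ.get("TABLE_NAME", "demo-table")

        # The event is the only input; there is no server or container for the
        # developer to manage.
        name = event.get("name", "world")

        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!", "table": table_name}),
        }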

So, how do you deal with FaaS, as a developer? These are the most common concepts and tasks that you need to know:

  • Independent functions development: This is what you’d hopefully do most of the time if the framework/platform allows you to automate a large amount of the following tasks.
  • Versioning & staging (nice-to-have): Unfortunately, not every FaaS platform offers native versioning and staging environments, but I’d say that this is a critical requirement for stable and mature FaaS platforms.
  • Cross-team collaboration: In order to speed up your team, you may need collaboration tools and role-based permissions that are well integrated with your versioning and staging functionalities.
  • Automated workflow: Automation is not always required for simple FaaS setups that lack features such as versioning or event mapping, but an automation framework becomes a core component of your stack as soon as you build a complex application with FaaS.
  • Triggers/Events: The way you invoke functions can vary based on the specific FaaS platform, but you will always need a flexible and stable tool to track down what’s being invoked, how, and when. Things can get very complicated when you need to update such trigger configurations, especially if you haven’t followed all of the recommended best practices.
  • Local unit testing: Unit testing is recommended with FaaS as with any other programming stack, although FaaS often requires deep integration with the corresponding cloud environment, and you may need to learn how to mock it properly (see the sketch after this list).
  • Integration tests: Testing every component of a FaaS can be tricky, especially if you consider how many things can’t be tested locally and how many can easily go wrong in a real environment. Ideally, you should have adequate tools to test and identify problems at each step (function code, trigger configuration, deployment process, 3rd-party services, etc.).
  • CI/CD: If every point mentioned above is satisfied, achieving continuous integration and continuous delivery shouldn’t be difficult.
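
As an illustration of the local unit testing point above, here is a minimal sketch that tests a hypothetical handler by injecting a mocked cloud client, so no real cloud environment is needed. The handler and its S3-like client are made up for the example.

    import unittest
    from unittest.mock import MagicMock

    def handler(event, s3_client):
        # Hypothetical function under test: reads an object through an injected
        # S3-like client, which keeps it testable without any cloud environment.
        body = s3_client.get_object(Bucket=event["bucket"], Key=event["key"])
        return {"size": len(body["Body"])}

    class HandlerTest(unittest.TestCase):
        def test_returns_object_size(self):
            # Mock the cloud dependency instead of calling the real service.
            fake_s3 = MagicMock()
            fake_s3.get_object.return_value = {"Body": b"hello"}

            result = handler({"bucket": "demo", "key": "file.txt"}, fake_s3)

            self.assertEqual(result["size"], 5)
            fake_s3.get_object.assert_called_once_with(Bucket="demo", Key="file.txt")

    if __name__ == "__main__":
        unittest.main()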

As far as I know, a complete platform does not yet exist, although you can get pretty close by using AWS Lambda and the Serverless Framework. I think all of the other serverless solutions will begin to keep up with the ideal expectations in 2017.

FaaS in the open-source world

Open-sourcing a project is a great way to let the community improve it and to get feedback as soon as possible (and ideally before the project is obsolete with respect to potential “competitors”).
There are three main open-source projects that allow you to build your own FaaS platform:

  • Apache OpenWhisk: Backed by IBM and recently incubated by Apache, OpenWhisk is the FaaS component of IBM Bluemix, although it does not support Kubernetes yet (see the GitHub issue or read more about it here).
  • IronFunctions: The open FaaS component of Iron.io. It supports both Docker and Kubernetes.
  • Funker: A small open-source project based on Docker Swarm, which already supports Node.js, Python, and Go.

I haven’t personally experimented with IronFunctions and Funker, but they both seem very promising. On the other hand, OpenWhisk is much more mature and comes with dev-friendly tools such as the Serverless Framework OpenWhisk Plugin.

FaaS on Kubernetes: FaaS frameworks for Kubernetes

A few FaaS solutions for Kubernetes have been announced and shared recently, thanks to several companies and individual contributors active in the Kubernetes community.
I explored three of these solutions during the webinar:

  • Kubeless: A proof-of-concept developed at Skippbox based on Apache Zookeeper and Apache Kafka. It supports both HTTP and PubSub as function triggers.
  • Funktion: Developed by fabric8 and backed by Red Hat. It seems to be well integrated with fabric8’s Developer Platform, and it supports multiple languages such as JavaScript, Python, Ruby, Java, etc.
  • Fission: As the newest of these solutions, it was designed to be extensible and fast, thanks to reduced cold starts. For now, it only supports HTTP triggers and Node.js/Python.

Interestingly, each of these frameworks is written in Go, and they offer a very similar approach from the function development point of view. Funktion and Fission adopted the very same function interface, which is also quite similar to AWS Lambda's.
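
For example, a Fission-style Python function at the time was essentially a module exposing a main() entry point whose return value becomes the HTTP response body. This is a sketch based on the project's early examples; check the current Fission documentation for the exact interface, as it may have evolved since then.

    # A Fission-style Python function: the environment looks for a main() entry
    # point and returns its result as the HTTP response body.
    def main():
        return "Hello from a function running on Kubernetes!\n"

Deploying it then comes down to registering a Python environment, creating the function from this file, and mapping an HTTP route to it with the fission CLI; the exact commands depend on the Fission release you are running.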

Pros & Cons of “on-premises” FaaS

Why would you decide to build your own FaaS platform? I think there are a few reasonable benefits, as well as strong limitations. As usual, it’s a tough trade-off between freedom, features, costs, and complexity. This holds with or without Kubernetes, in my opinion.
Here are the potential benefits of building your own FaaS platform:

  • You will achieve the same abstraction level for developers while using open-source technologies instead of proprietary black boxes.
  • You will have much more control over infrastructure and therefore the flexibility to cover more scenarios with fewer constraints.
  • Since you own the infrastructure, you may avoid a few non-functional limitations such as maximum timeouts, unmodifiable machine configurations, etc.
  • The overall solution might be cheaper and faster, although you should also take operational costs into consideration.

In terms of the cons, I see two. First of all, many of the nice-to-have features of FaaS are not currently available out of the public cloud, such as versioning, staging, environment variables, per-function monitoring and logging, advanced permissions and orchestration, and native triggers for databases, streams, object storage, etc. Secondly, the possibility to have more control over the underlying infrastructure may turn out to be a nightmare and not as cost effective as a public cloud solution, given the high operational complexity.

FaaS on Kubernetes: Webinar Q&A

Here is the complete list of questions that we received during the webinar.

Q: What is the relationship between functions and APIs? I guess we’d want publicly facing APIs that forward to internal functions?

Functions don’t necessarily correspond to RESTful APIs. Some of your functions may respond to events in your system such as newly uploaded objects, updated database records, incoming streaming data, etc. In the case of APIs backed by functions, you need publicly facing endpoints that will forward requests to your function and return its results back as HTTP responses. This is exactly what most FaaS frameworks do, as well as PubSub mechanisms that allow you to implement more complex and event-driven logic.
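
Purely as an illustration of that forwarding mechanism (real FaaS frameworks handle this routing for you), here is a minimal Flask sketch in which a publicly facing endpoint forwards the request body to an internal function and returns its result as an HTTP response. The route and the payload shape are made up for the example.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def internal_function(payload):
        # The "business logic" function, completely unaware of HTTP.
        return {"greeting": f"Hello, {payload.get('name', 'world')}"}

    @app.route("/greet", methods=["POST"])
    def greet():
        # Publicly facing endpoint: forwards the request body to the function
        # and translates its result into an HTTP response.
        result = internal_function(request.get_json(silent=True) or {})
        return jsonify(result), 200

    if __name__ == "__main__":
        app.run()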

Q: Apart from AWS Lambda, are there other FaaS platforms?

The most well-known public FaaS platforms are AWS Lambda, Azure Functions, Google Cloud Functions, IBM Bluemix OpenWhisk, Webtask.io, Iron.io, weblab.io, stdlib.com, UnitCluster, and many more still in preview or stealth mode.

Q: I understand that you probably don’t want to be biased, but for newbies, it would be nice to have a recommendation for which one to start with.

Although some of them have a corresponding commercial service, they are all open-source projects worth exploring. OpenWhisk seems to be the most mature and best supported, even if it doesn't support Kubernetes yet. Regarding the three Kubernetes frameworks I presented, I would start with Fission, which seems to be the most promising for now.

Q: Are you seeing an increase in the industry in terms of moving from PaaS toward FaaS?

Yes, definitely. FaaS and serverless are quickly changing how developers build and ship software. As with every technology out there, it's not the perfect choice for every scenario, but it can drastically simplify your life and development workflow when you need to build microservices, RESTful APIs, and highly scalable event-driven applications.

Q: Which of the most mature frameworks make the debugging process easier?

I am not aware of any easy debugging strategy for FaaS at the moment. This is probably the hardest thing to do when you start having 10+ functions that react to different events and sources. AWS is actively working to solve the problem in-house with AWS X-Ray (still in preview). Other players such as the Serverless Framework allow you to fetch function logs and inspect them locally, which is more comfortable, but still not easy. Other services and startups – for example, IOpipe – are working hard on the same problem, which involves both debugging and monitoring.

Q: Cloud service, then microservice, then container, then function. So, what is the future of function?

That’s a very good question. I am not sure if we can achieve a more granular unit of delivery than individual functions. Indeed, FaaS might be the ultimate model from the development point of view, although there is plenty of margin for improvement, especially on the underlying layer. The future of FaaS – and development in general – might go from serverless to codeless altogether. That’s a future where everything has already been implemented by someone else, and you just have to put all of the pieces together. At that point, I’m afraid that machines will have already taken over! 🙂

Let us know what you think of FaaS and what you’ve built with it. You can watch the FaaS on Kubernetes webinar here. Don’t forget to watch our webinars if you want to learn more about Kubernetes and Docker.

The post “On-Premises” FaaS on Kubernetes: Webinar Recap and Q&A appeared first on Cloud Academy.
