Cloud Foundry | Cloud Academy Blog
https://cloudacademy.com/blog/category/cloud-foundry/

What is Cloud Foundry? Key Benefits and a Real Use Case
https://cloudacademy.com/blog/cloud-foundry-benefits/ (Sat, 06 Jul 2019)

Unlike most other cloud computing platform services — which are tied to particular cloud providers — Cloud Foundry is available as a stand-alone software package. If desired, you can deploy it on AWS, but you can also host it yourself on your own OpenStack server, or through HP Helion or VMware vSphere.

In this article, we’ll cover the basics of Cloud Foundry, but you can learn more about similar DevOps principles using Cloud Academy’s DevOps Playbook Learning Path.

What is a cloud computing platform?

Broadly speaking, there are three major categories of cloud computing:

  • Infrastructure-as-a-Service (IaaS), which provides only a base infrastructure, leaving the end user responsible for the platform and environment configurations necessary to deploy applications. Amazon Web Services and Microsoft Azure are prime examples of IaaS.
  • Software-as-a-Service (SaaS), which provides a finished product for end users, such as Gmail or Salesforce.
  • Platform-as-a-Service (PaaS), which helps to reduce application development overhead (i.e. environment configuration) by providing a ready-to-use platform. PaaS services can be hosted on top of infrastructure provided by an IaaS.

The three categories of cloud computing platforms: IaaS, PaaS, and SaaS

Since it’s easy to become a bit confused when thinking about cloud platforms, it’s important to be able to visualize exactly which elements of the compute ecosystem are which party’s responsibilities. While there is no precise definition, it’s reasonable to say that a platform requires only that you take care of your applications.

With that in mind, the platform layer should be able to provide the following features:

  • A suitable environment to run an application.
  • Application life cycle management.
  • Self-healing capacity.
  • Centralized management of applications.
  • A distributed environment.
  • Easy integration.
  • Easy maintenance (such as upgrades, patches, etc.).

What is Cloud Foundry?

Cloud Foundry is an open source cloud computing platform originally developed in-house at VMware. It is now owned by Pivotal Software, a joint venture made up of VMware, EMC, and General Electric.

Cloud Foundry is optimized to deliver:

  • Fast application development and deployment.
  • Highly scalable and available architecture.
  • DevOps-friendly workflows.
  • A reduced chance of human error.
  • Multi-tenant compute efficiencies.

Not only can Cloud Foundry lighten developer workloads, but because it handles so much of an application's resource management, it can also greatly reduce the overhead burden on your operations team, freeing your resources for development.

Cloud Foundry's architecture includes the components and the level of interoperability needed to permit:

  • Integration with development tools.
  • Application deployment.
  • Application lifecycle management.
  • Integration with various cloud providers.
  • Application execution.

For more information, read Cloud Foundry: Understanding the Core Components.

Although Cloud Foundry supports many languages and frameworks, including Java, Node.js, Go, PHP, Python, and Ruby, not all applications are a good fit. As with most modern software applications, your project should follow the Twelve-Factor App methodology.

What are the key benefits of Cloud Foundry?

  • Application portability.
  • Application auto-scaling.
  • Centralized platform administration.
  • Centralized logging.
  • Dynamic routing.
  • Application health management.
  • Integration with external logging components like Elasticsearch and Logstash.
  • Role-based access for deployed applications.
  • Provision for vertical and horizontal scaling.
  • Infrastructure security.
  • Support for various IaaS providers.

Getting started with Cloud Foundry

Before deciding whether Cloud Foundry is for you, you should try deploying a real application. As already mentioned, to set up a suitable environment you will first need an infrastructure layer. As of June 2019, Cloud Foundry runs on AWS, Azure, Google Cloud Platform (GCP), OpenStack, VMware vSphere, SoftLayer, and others. As a first step, we'll look at some basics of setting up Cloud Foundry with Pivotal Web Services (PWS).

PWS provides “Cloud Foundry as a web service,” deployed on top of AWS. You’ll just need to create an account and you’ll automatically get a free trial. Getting started isn’t a big deal at all.

Hosting static files in Cloud Foundry

Once you've created your account and set up the command line interface (CLI) tool, you'll be ready to deploy your application. We're going to use some static files, which means we'll need one folder and a few HTML files. Make sure there's an index.html file among them.

Normally, deploying static files requires a web server like Apache or Nginx. We won't have to worry about that here: the platform automatically takes care of any internet-facing configuration we need. You only need to push your application files to the Cloud Foundry environment, and everything else is handled for you.

Now, copy the folder with your files to the machine where you’ve installed the CLI and log in to the CLI using this API endpoint:

cf login -a https://api.run.pivotal.io

You will be asked to provide some information:

  1. Username (the username you used to log in to your PWS account).
  2. Password (the PWS password you created).
  3. Organization name (any name will work).
  4. Space (select any space where you want your application to be deployed).

Once you're successfully logged in, you can run Cloud Foundry commands using cf. Start with cf help to list all available commands.

cf help

Now go to your application folder and create an empty file called Staticfile; its presence tells Cloud Foundry to stage the app with the staticfile buildpack:

touch Staticfile

Push the application using:

cf push <application name>

Verify that the app is running by visiting the URL shown in the output of the previous step. There's more information on hosting a sample static file in Cloud Foundry.
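
Putting the whole flow together, a minimal session might look like the sketch below. The folder and app name my-static-site are placeholders (not from the original walkthrough), and your org and space will be whatever you chose at login:

mkdir my-static-site && cd my-static-site
echo "<h1>Hello from Cloud Foundry</h1>" > index.html
touch Staticfile                  # presence of this file selects the staticfile buildpack
cf login -a https://api.run.pivotal.io
cf push my-static-site -m 64M     # -m caps the memory; a static site needs very little
cf apps                           # shows the route the platform assigned to the app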

That’s it! You’ve gotten started with Cloud Foundry and can continue to explore how it frees you and your teams from infrastructure overhead.

Hands-on AWS Serverless Application Model
https://cloudacademy.com/blog/hands-on-aws-serverless-application-model/ (Thu, 12 Jan 2017)

The November 2016 AWS re:Invent brought us a variety of awesome new tools, products, and services. Amazon Athena, Amazon QuickSight, Amazon EC2 F1, I3, and R4 instance types, AWS Glue, Amazon Lex, MXNet, Amazon Lightsail, and AWS X-Ray are just a few of the long list of tools announced at the show. Digging a little deeper into these new tools, I found one that I had been waiting for: the AWS Serverless Application Model, or AWS SAM, previously known as Project Flourish.

In the past, I've worked with similar tools or frameworks to keep my infrastructure as code and, of course, versioned. Serverless Framework, Apex, Terraform, and Chalice are just a few of the ones I've tried, and I found them all useful for simplifying the creation, deployment, and management of AWS Lambda functions and other resources like API Gateway, DynamoDB, etc.
Let’s talk a little more about the AWS Serverless Application Model and then we can dive into an example of how to use it.

What is the AWS Serverless Application Model?

The AWS Serverless Application Model allows you to describe or define your serverless applications, including their resources, in an easier way, using AWS CloudFormation syntax. In other words, AWS SAM is a CloudFormation extension optimized for serverless applications. It supports anything that CloudFormation supports, and you can use YAML or JSON syntax to write your templates.

Clear enough? Maybe not. To understand it better, let’s look at a very simple example.

AWS SAM: Hello World

Let’s start with the first well-known application: Hello World.

I will use an existing blueprint that the AWS Lambda console provides: lambda-canary.

Follow these steps to download the blueprint:

  • Open your AWS Lambda Console.
  • Click on Create a Lambda Function.
  • Select “Python 2.7” in the “Select runtime” list.
  • Click the little icon located on the bottom right. This will open an “Export blueprint” dialog box.
  • Click the “Download blueprint” button.

You will now have a lambda-canary.zip with two files inside:

  • lambda_function.py
  • template.yaml

Unzip it in a directory somewhere on your computer, and let's examine each file.

The first one is a simple Lambda function that checks for a string within a site. Go ahead and take a look at it. You will find another interesting new feature of AWS Lambda: environment variables. In Python, we use environment variables by first importing the os module and then reading them in the following way:

import os
SITE = os.environ['site']
EXPECTED = os.environ['expected']

This is very useful when we have different environments like development, testing, production, etc. We won’t discuss them in depth right now, but you can check it out here: AWS Lambda Supports Environment Variables.
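
If you ever need to change these values on an already-deployed function without touching the template, the AWS CLI (which we'll set up a little later in this post) can update them directly. A sketch, with my-canary-function standing in for the real, generated function name:

aws lambda update-function-configuration \
    --function-name my-canary-function \
    --environment "Variables={site=https://www.example.com/,expected=Example Domain}"

Keep in mind that anything you change this way will be overwritten the next time the CloudFormation stack is deployed, so the template should remain the source of truth.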

The second one is our blueprint or template, and this is where things get interesting:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: 'Performs a periodic check of the given site, erroring out on test failure.'
Resources:
  lambdacanary:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python2.7
      CodeUri: .
      Description: >-
        Performs a periodic check of the given site, erroring out on test
        failure.
      MemorySize: 128
      Timeout: 10
      Events:
        Schedule1:
          Type: Schedule
          Properties:
            Schedule: rate(1 minute)
      Environment:
        Variables:
          site: 'https://www.amazon.com/'
          expected: Online Shopping

As you can see, the syntax is pretty much the CloudFormation syntax that we already know, but with a slight difference:
Transform: 'AWS::Serverless-2016-10-31'

This line is used to include resources defined by AWS SAM inside a CloudFormation template.

Note also the section where the properties of our Lambda function are defined:

  • Handler: This is the method that will be called when our Lambda function runs.
  • Runtime: The programming language in which our Lambda function is written.
  • Description: A brief text that describes what our function does.
  • MemorySize: How much memory our function will use.
  • Events: This describes what will trigger our function. In this case, we have a CloudWatch Scheduled event, and it will run every minute.
  • Environment: The environment variables that our function will use.
  • Timeout: The maximum time the function is allowed to run before it is terminated and marked as failed.
  • CodeUri: Where our Lambda function and libraries are located… wait, a dot?

Wait, what is that dot in CodeUri?! For now, let’s keep it that way. (We’ll come back to this later in the post.)

So, now that we have our blueprint and our Lambda function, what’s next?

AWS Serverless Application Model: Package and Deploy

The first thing to do is to create a package. This package will be uploaded to an Amazon S3 bucket, and from there we are going to deploy it.

But before going further in our guide, let’s make a little change in our template:

  • Open the template.yaml file with your favorite text editor.
  • Go to line 24, where the ‘expected’ environment variable is, and change the value from ‘Online Shopping’ to ‘About Amazon’.

We have to make that change because the ‘Online Shopping’ string does not appear anywhere on ‘https://www.amazon.com’. Therefore, our check is going to fail every time.

You can change the site and the expected string to your own website if you’d like.

Now, we can continue…

You have the AWS CLI already installed, right? No? Ok, no worries. Let’s do it right now. Go to AWS Command Line Interface, and follow the instructions for installing it.

Or follow these easy steps:

If you haven’t done it yet, you need to generate your Access key ID and Secret access key from the Identity and Access Management Console. Download them and store them in a safe place.

If you are using Linux, you can run the following command to have the AWS CLI ready for action (you need Python Pip installed first though):

pip install awscli

Or, if you are using macOS, you can alternatively install the CLI via Homebrew:

brew install awscli

If you use Windows, you have to download the 64-bit or the 32-bit installer.
Finally, run the following command to configure the AWS CLI:

aws configure

You will be asked for:

AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:

Please fill each field with the correct values.

Now we are ready to package and deploy!

Let’s do this!

aws cloudformation package \
    --template-file template.yaml \
    --output-template-file output.yaml \
    --s3-bucket example-bucket

We just created a new CloudFormation template, output.yaml, with the URI of our code to have it ready for deployment. This is actually the only difference between our original template and the one generated by the package command.
The output will be something like this:

Uploading to cdaf78f67aefd46edeac3ceae77124ed  349733 / 349733.0  (100.00%)
Successfully packaged artifacts and wrote output template to file output.yaml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file /Users/gsantovena/Projects/SAM/output.yaml --stack-name

This is very useful when we are implementing continuous integration and continuous delivery. The package command will zip our code, upload it to S3, and add the correct CodeUri property to the output template. This is why the CodeUri is empty, or is set to a dot, in our original template. Each time we run the package command, it will generate a different ID for our package and will fill the CodeUri field with the correct address. In this way, we can deploy new versions of our Lambda function or roll back to a previous version. Go ahead and open the AWS S3 console and see it with your own eyes.
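
If you prefer the CLI to the console, the same check can be done with a single command (example-bucket is the placeholder bucket name used in the package command above):

aws s3 ls s3://example-bucket/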

Now, we are ready to deploy it. Run the following command:

aws cloudformation deploy \
    --template-file output.yaml \
    --stack-name MyHTTPMonitor \
    --capabilities CAPABILITY_IAM

The output will be something like this:

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - MyHTTPMonitor

This command will create or update a stack named MyHTTPMonitor with our serverless application based on the template we just created.

Let's check our AWS CloudFormation console.
Also, take a look at the Template tab of our stack. You will see two new things:

  • View original template
  • View processed template

The first one is obvious: it is our original template, the one we deployed. So, no changes here.

The second one is the template that was created by transforming our original template, explicitly adding all of the resources needed for our function to work properly.

Click on that radio button and take a look at it. You will see the definitions of AWS::Lambda::Permission, AWS::Events::Rule, AWS::IAM::Role, and, of course, AWS::Lambda::Function.

Now, let's check our AWS Lambda Functions console.
We are adding the --capabilities CAPABILITY_IAM parameter because the AWS::Serverless::Function resource is going to create a role with the permissions to execute our Lambda function, and we want CloudFormation to be able to create it for us.

Easy, huh? We have our CloudFormation stack with all the artifacts needed by our Lambda function.

Now, let’s change something in our original template, re-package it, and re-deploy it to see what happens:

First, open the template.yaml and change the site environment variable at line 23 to 'http://www.example.com' and the expected environment variable at line 24 to 'Example Domain'. Again, you can set those variables to anything you want; just be sure that the expected string is found on the site you are using.

Now, run the package command again. (You can change the name of the output template file if you want to see that the new CodeUri is different):

aws cloudformation package \
    --template-file template.yaml \
    --output-template-file output.yaml \
    --s3-bucket example-bucket

If you go to the AWS S3 console now, you will see the new package there along with the first one we already deployed.
And then the deploy command:

aws cloudformation deploy \
    --template-file output.yaml \
    --stack-name MyHTTPMonitor \
    --capabilities CAPABILITY_IAM

Remember to put the correct filename if you changed it in the package step.
Let's check our AWS CloudFormation console one more time, then our Lambda function's environment variables, and finally the CloudWatch logs for our Lambda function (this is also where you can troubleshoot your Lambda functions).
As you can see, the stack was updated, the environment variables were updated, and the CloudWatch logs are showing that our function is checking the new site we set.
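
You can read those CloudWatch logs from the CLI as well. Lambda log groups follow the /aws/lambda/<function-name> pattern, so a sketch looks like this (substitute the generated function name shown in your Lambda console):

aws logs filter-log-events \
    --log-group-name /aws/lambda/<generated-function-name> \
    --limit 20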

What’s next?

Automate everything!

You have now completed the first four steps of something called Application Lifecycle Management:

  • Author
  • Package
  • Test
  • Deploy

However, you did all of this by hand, and doing it by hand every time is not a good idea.

What about making more changes to our code or our template? We would need to package it again, test it again, and finally, deploy it again.

These steps can be, and should be, automated. So, the next step would be to implement continuous integration and continuous delivery, or CI/CD.

First, let’s look at our requirements:

  • VCS: The first thing to do is version our code using any Version Control System we are comfortable with. In my case, I prefer to use Git.
  • CI/CD: Next, we have to use a Continuous Integration and Continuous Delivery tool. There are many we can use.

Basically, we are going to configure our CI/CD tool so that when our code is updated by someone, or by us, it triggers a build. If the build is successful, it will be deployed to a test environment and run some tests. If it passes all of the tests, it will then be deployed to our production environment.
As you can see, you can use Amazon Web Services for all of the steps I just described!

Implementing all of this, and integrating what we just learned about the AWS Serverless Application Model into this flow, is not that difficult. We will learn how to do it in an upcoming post.

And, if you haven’t already read about it, there is another cool and useful new service by Amazon Web Services: AWS CodeBuild. This can be integrated into our CI/CD flow.
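
As a small first step toward that automation, the package and deploy commands we ran by hand can be wrapped in a script that any CI/CD tool (Jenkins, CodeBuild, and so on) can run on every commit. A minimal sketch reusing the bucket and stack names from earlier; deploy.sh is a hypothetical helper, not part of SAM itself:

#!/bin/bash
# deploy.sh - package and deploy the SAM application
set -e

TEMPLATE=template.yaml
OUTPUT=output.yaml
BUCKET=example-bucket
STACK=MyHTTPMonitor

aws cloudformation package \
    --template-file "$TEMPLATE" \
    --output-template-file "$OUTPUT" \
    --s3-bucket "$BUCKET"

aws cloudformation deploy \
    --template-file "$OUTPUT" \
    --stack-name "$STACK" \
    --capabilities CAPABILITY_IAM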

Conclusion

Creating and deploying AWS Lambda functions is now easier than ever! With the AWS Serverless Application Model, you will be creating, deploying, and managing your Lambda functions in no time. Check out the documentation for more information.

Cloud Computing Impact on Business
https://cloudacademy.com/blog/cloud-computing-impact-on-business/ (Wed, 05 Oct 2016)

Cloud computing impact on business: deciding to move your business to the cloud is not the end of the journey, but rather the beginning. While the focus tends to be on the period of migration, cloud computing has ripple effects on internal business operations and processes. It's important not to overlook these ongoing changes as you plan for the "end goal" of moving to the cloud.

As with any change to your organization, your business dynamics and processes will be affected in a number of ways you may not have seen coming. And as with any change, there are likely to be unexpected consequences that will affect how much internal effort an organization will need to devote to the transition. I’ll highlight some of those here, and go into more detail about this in our Cloud Academy course on the Internal Business Effects of the Cloud.

Cloud computing impact on human resources

With any internal business change, it's necessary to consider not only the cloud computing impact on business but also on your most important resource: people. Although the move to the cloud is a technical one, it has a major human resources component to it. You need to consider whether your teams will have the skills necessary to run the new environment, and how you will manage those who don't. Transitions are always delicate, and this one is no exception, so making sure you can handle the changes with care will be an important component of your transition to the cloud.

Cloud computing financial implications


Then there are the financial implications of cloud computing. While there is definitely potential for cost savings, the impact on your budget will be ongoing, will come with some unknowns, and can be more complex than just considering implementation costs and ROI. Consider, for instance, what we just discussed about possible changes or training for personnel. Understanding the landscape of cloud costs, and how this differs from managing your own data centers on premises, will change how your organization measures the success of the migration in the mid-to-long term.

You can’t make a change without taking on some level of risk, and being prepared to handle and mitigate the legal, strategic, and security risks inherent in migration to the cloud will be key to your success.  
To put some additional context around what I have discussed so far, let’s introduce a scenario where a fictional business ‘Telco-Corp-UK’ is looking to utilize the Cloud to host some of their customers’ services to see how this affects their internal operations.

A Cloud computing impact scenario

A large telco company, 'Telco-Corp-UK,' has made the strategic decision to migrate its customer-managed services from on-premise to Amazon Web Services within 6 months. This infrastructure currently comprises a number of networks, databases, applications, and security services. The team has skill sets primarily around virtualisation and Microsoft applications, with minimal Linux and programming language experience. The team is not AWS certified or experienced with cloud technology, and some employees are reluctant to change their career direction.

Also, Telco-Corp-UK has recently landed a contract to manage a health care provider's infrastructure, which has to meet HIPAA compliance. This level of governance has not been required by any other customer and is new territory for the supporting teams.

So let's break down how this move would affect Telco-Corp-UK.

Let's start with some of the business dynamics. The sales team would be able to adopt a new approach, become more diverse, and target customers that were previously out of reach due to the restricted amount of on-premise resources available to support new customers. Now that Telco-Corp-UK is hosting customers within the Cloud, there is no longer this restriction, and larger customers can be targeted all over the globe at an increased frequency.

Responsibilities internally will also change. For example, the Data Centre Manager will no longer have full visibility of the entire infrastructure estate within Telco-Corp-UK. This will now be a shared responsibility between the DC Manager and a Cloud Operations Manager (a role that will need to be defined, provisioned, and recruited for).

Telco-Corp-UK has essentially handed over a level of responsibility to the Cloud vendor who will now be responsible for the security, maintenance, power and cooling of the host hardware in the Cloud within the vendor’s Data Center.

Business processes and procedures will need to be modified for almost every department, as a change of this scale is likely to affect Service Delivery, Finance, Support, Development & Testing, Storage & Backup, Networks, Sales, Risk, Operations, and even Human Resources. That is a lot of administration to take care of within each department. However, this rework of processes and procedures should refine each department's methods and help optimize their daily operations.

Cloud Computing impact: where to start

Within this scenario, the deployment and operations team's methods will be revolutionized compared to their current on-premise strategy. This is the team on the ground deploying the new resources and infrastructure within the Cloud. The time frame to deploy new services, applications, and customers will be dramatically reduced thanks to on-demand resourcing, which plays a huge factor here. If they then couple this with Cloud automation, auto-scaling, or even serverless computing, Telco-Corp-UK could achieve some great optimization within their deployment process.

Next, if we look at some of the employee implications within this scenario, we can safely assume that there will be many opportunities for new cloud-based roles for those wishing to advance or change their career. An entirely new team will need to be created to manage this new architecture and environment; however, we also need to bear in mind that there will still be local internal services that will continue to be managed on-premise.

We know that the skill set of the team has some huge gaps if it intends to achieve the end goal. Training programs to upskill the existing workforce will be required, ideally toward AWS Solutions Architect, SysOps Administrator, and DevOps Engineer roles. This will allow the team to design, architect, deploy, and manage network segmentation, database migrations, application deployment, and security services within the Cloud.

There will also be a requirement to develop the team's programming capabilities.

Skills in languages and tools such as Python, Chef, and Puppet would be great to have within Telco-Corp-UK to make use of the powerful APIs across the AWS services, all helping to improve application deployment.

Cloud Computing impact on training

Due to time constraints, there will be a need to bring in additional external expertise to kick off the project. However, once operational, additional personnel could be recruited from almost anywhere across the globe to manage the Cloud environment, as it can all be accessed remotely.

The requirement to be HIPAA compliant for the new health care provider will definitely require a Cloud security specialist, who will likely need to be sourced from outside the organization. They will require a deep understanding of Cloud security at all levels and a solid awareness of security governance and compliance programs.

Adopting the Cloud allows Telco-Corp-UK to refine their existing procedures, processes, deployment methods, and administrative maintenance. This refinement and optimization result in cost savings when compared to how the same services were delivered on-premise.

Being a large telco company, they are bound by many different Service Level Agreements (SLAs) with different customers. When moving their infrastructure to the Cloud, Telco-Corp-UK will need to ensure that the AWS SLAs for all the required services in each region align with their existing customer SLAs. Any gap between the AWS SLA and a customer SLA will need to be identified, and a solution for closing that gap will need to be implemented. This could mean negotiating with AWS or implementing additional resiliency into the service. This level of contractual obligation can be a difficult step for some organizations to overcome.

All of this training and recruitment can be expensive and must be catered for within your cloud budgets.

For those employees who do not wish to change career and develop their skills within the Cloud environment, Telco-Corp-UK could offer additional training within their current area of expertise to help improve and maintain any existing on-premise architecture that will need to remain.

This will help to maintain morale and demonstrate that you still have all employees' interests in mind.
Moving on to how some of the financial impacts change, we can already see that there will be some HR costs associated with training, new positions, and any potential redundancies.

One major change for Telco-Corp-UK would be the amount of capex required going forward: they will see a sudden drop, as they will no longer be purchasing as much hardware for on-premise use. Instead, they will see a steady rise in the opex costs for maintaining the Cloud infrastructure.

Billing of services to customers will be greatly enhanced and easier to manage in a more streamlined fashion. Telco-Corp-UK will be able to easily identify exactly which customers have consumed which resources. This allows them to potentially increase their margins, or pass on additional cost savings to their existing customers, which will help with customer retention. The finance team could have direct access to the AWS console, allowing them to pull billing reports for each customer independently and help refine where additional savings could be made.

Within our scenario, Telco-Corp-UK needs to pay particular attention to the HIPAA compliance required by their new customer. All services relating to this health care provider will need to meet strict regulations and must be audited to confirm their readiness. This may result in different deployment methods or storage services for this one customer. AWS already supports HIPAA-compliant workloads, but there are additional steps that Telco-Corp-UK would need to take to be fully compliant with regard to the data they are storing. They can't be complacent and assume that because AWS is compliant, all of their data and services are compliant too; that's not how it works.

Let's take a quick look at some of the risks Telco-Corp-UK are facing by moving to the Cloud. They are becoming reliant on a third party, and by doing so they are losing some control of their business. Some key questions need to be answered, including: how will AWS handle outages? What if there are failures within AWS that take down an entire region? Has Telco-Corp-UK architected their infrastructure across a multi-region design to allow for high availability?

There is a risk that the decision to migrate to the Cloud may not have been the right strategy. After 6 months of operations, Telco-Corp-UK may find that the cost savings and optimization are not aligning with the figures that were initially expected and, as a result, they are now losing revenue. How did this failure occur, and what should be their next step?

The Cloud technology itself could also be an issue; perhaps it's too new for some legacy applications to operate and run effectively on. The applications might not be cloud-ready and therefore will fail to make use of key characteristics of the Cloud such as scalability and flexibility. Major development could be required to decouple the application and resolve any issues.

Within this simple and short scenario, you can see that there are many points to consider, and I have only touched on a few within each area. For further information and in-depth answers to some of the questions surrounding these topics, take a look at my latest course, 'Internal Business Effects of the Cloud'. Throughout this course we analyze numerous effects, going into depth in different areas, how you can use them to benefit the business, and how you can mitigate unwanted risks.

If you are not yet in a position to understand whether the Cloud is right for your business, then take a look at the course 'Should Your Business Move to the Cloud', where we look at additional business challenges, benefits, and constraints. This course has been followed up by a webinar where we also discussed 'Can the Cloud be Right for Your Business as a Strategy?'
These courses focus on the cloud computing impact from the business perspective rather than the technical capabilities that many of our other courses provide.

Pivotal Cloud Foundry: Set Up a Backup Strategy
https://cloudacademy.com/blog/pivotal-cloud-foundry-backup/ (Wed, 04 Nov 2015)

Pivotal Cloud Foundry deployments can be complicated. Learn how to properly create and restore from backups.

Pivotal Cloud Foundry (PCF) is a commercial distribution of the open source Cloud Foundry platform, offered as a collaboration between Pivotal, EMC, and GE. Pivotal Cloud Foundry runs on almost all popular cloud infrastructures, including VMware, AWS, and OpenStack. PCF as a platform is dynamic, developer friendly, and features full-lifecycle support.

Organizations implementing Pivotal Cloud Foundry as their cloud platform free themselves from managing application infrastructure. By integrating freely available third-party tools and services, they can also achieve high availability, auto-scaling, dynamic routing, multi-lingual support, and log analysis.

Pivotal Cloud Foundry performs exceedingly well when intelligently designed and maintained, but there are still some time-consuming tasks that demand an admin's attention. One such operational task is ensuring that installation settings and essential internal databases are regularly backed up. Pivotal recommends that you back up your installation settings by exporting them at regular intervals (weekly, bi-weekly, monthly, etc.). We're going to discuss designing an effective and reliable backup process, and how to apply an archive when you need to restore your installation.

Note: According to Pivotal Cloud Foundry documentation, exporting your installation only backs up your installation settings. It does not back up your VMs or any external MySQL databases that you might have configured on the Ops Manager Director Config page.

Pivotal Cloud Foundry: prerequisites

Before jumping in, it’s a good idea to make sure that you’ve covered all the prerequisites you’ll need to make Pivotal Cloud Foundry happy. You’ll need:

  • Sufficient space on your workstation to store the backup data from Pivotal.
  • Administrator credentials for the existing Ops Manager console.
  • Contact details for users who may be affected by the backup (or a subsequent restore).
  • Your Pivotal Cloud Foundry support details, if you have a subscription.

Pivotal Cloud Foundry: a backup strategy

Backing up a Pivotal installation is critical for the operation and availability of your Pivotal Cloud Foundry data center. Backing up Pivotal Cloud Foundry data centers is like creating restore points on a Windows machine. In the event of a crash or a failed upgrade, you can use your backed-up settings to fall back to an earlier, functional state. Here's what you'll need to do:

  • Export installation settings from the Ops Manager console.
  • Back up critical Pivotal databases: the Cloud Controller Database (CCDB), the User Account and Authentication (UAA) database, the Pivotal MySQL database, and the Apps Manager Database (called the Console DB prior to 1.5.x versions).
  • Back up the NFS Server data.
  • Identify the target archive location for your backups (ideally, it should be in a centralized location that’s part of an NFS share so that other team members can also access it).
  • Define backup frequency (daily, weekly, bi-weekly, monthly, etc).
  • Define your archival policy.
  • Automate the backup process as much as possible to reduce human error.
  • Define a notification process to inform all affected users and teams of pending process events.
  • Define a restore process (and document it well).

Pre-backup activities

To make sure that your system is ready for the process, there are some important details that will need taking care of in the pre-backup stage:

  • Make sure the Ops Manager Director is healthy by going to the Status tab; it should not display in red.

  • Make sure all your current VMs are healthy by checking the status tab in all individual tiles.
  • Ensure that you have enough available space in your NFS share for backing up the files.
  • Ensure there are no pending changes. If there are any pending changes, confirm whether or not they should be applied. Pending changes will not be backed up and, as we all know, not backing something up is a sure way to guarantee future disasters.

  • Make sure that all of the components of your VMs (like MySQL, CCDB, and NFS Server) are accessible and available before initiating a backup.

Backup Procedure

Backup/Export Installation Settings

  • From the Ops Manager dashboard page, export the installation settings.

  • Save the installations.zip file to your desired location. As the file size can be several GBs, make sure you have enough space in your path.

Backup Cloud Controller Database (CCDB)

Pivotal Cloud Foundry’s Cloud Controller Database maintains a database with records of orgs, spaces, apps, services, service instances, user roles, etc. Backing up this database is critical if you want to protect your existing settings (and you DO want to protect your existing settings).

  • From the Ops Manager Director, copy the IP address of the CCDB. You can find it by clicking on Dashboard -> Ops Manager Director -> Status.
  • Get the Director credentials from the Credentials tab.
  • From a command line, target the BOSH Director using the IP address and credentials that you have recorded.
$bosh target <IP_OF_YOUR_OPS_MANAGER_DIRECTOR>
$bosh login
Your username: director
Enter password:
Logged in as `director'
  • Run the following command (I’m using the path “/pcf-backup”):
$bosh deployments >> /pcf-backup/deployments_09_20_2015.txt
  • Run the following command from within /pcf-backup:
$bosh download manifest DEPLOYMENT-NAME LOCAL-SAVE-NAME
$bosh download manifest cf-1234xyzabcd1234 cf-backup-09_20_2015.yml
  • The deployment name will be taken from the first entry starting with “cf” in the name column from the .txt file.
  • Run bosh deployment <DEPLOYMENT-MANIFEST> to set your deployment.
$bosh deployment cf-backup-09_20_2015.yml
  • Run bosh vms <DEPLOYMENT-NAME> to view all the VMs in your selected deployment.
$bosh vms cf-1234xyzabcd1234
  • Run bosh -d <DEPLOYMENT-MANIFEST> stop <SELECTED-VM> for each Cloud Controller VM.
$bosh -d cf-backup-09_20_2015.yml stop cloud_controller-partition-cdabcd1234b253f40
$bosh -d cf-backup-09_20_2015.yml stop cloud_controller_worker-partition-cdabcd1234b253f40
  • Note the details from the cf-backup-09_20_2015.yml file: 
ccdb:
  address: 1.2.99.16
  port: 2544
  db_scheme: postgres
  • Select the Cloud Controller VM Credentials from Dashboard -> Elastic Runtime -> Credentials.
   vm Credentials     vcap / xyz1234567989pqr
  • SSH into ccdb with your Cloud Controller Database VM Credentials.
$ssh vcap@<IP_ADDRESS_OF_CCDB>
  • Run the following to find the locally installed psql client on the CCDB VM:
$find /var/vcap | grep 'bin/psql'

Your output should look something like this:

$/var/vcap/data/packages/postgres/b63fe0176a93609bd4ba44751ea490a3ee0f646c.1-9eea4f5b6de7b1d8fff28b94456f61e8e22740ce/bin/psql
  • The next command will ask for the admin credentials, which can be found in the .yml file (cf-backup-09_20_2015.yml):
$/var/vcap/data/packages/postgres/<random-string>/bin/pg_dump -h 1.2.99.16 -U admin -p 2544 ccdb > ccdb_09_20_2015.sql
  • Exit from vcap. Run the following commands from within the backup path on your local workstation:
$scp vcap@1.2.99.16:/home/vcap/ccdb_09_20_2015.sql /pcf-backup

This will complete the CCDB backup process.

Backup your User Account and Authentication Database (UAADB)

  • Get the UAA Database VM credentials from the Elastic Runtime credential page:
vm Credentials     vcap / xxxxxxxxxxxx
Credentials           root / xxxxxxxxxxxxxxxxxx
  • Retrieve your UAADB address from the cf-backup-09_20_2015.yml file and then log in to UAADB:
$ssh vcap@1.2.90.17
  • Run:
$find /var/vcap | grep 'bin/psql'
  • Based on the output of the above “find” operation, run the following:
$/var/vcap/data/packages/postgres/<random-string>/bin/pg_dump -h 1.2.90.17 -U root -p 2544 uaa > uaa_09_20_2015.sql
  • Provide the root password recorded above. Once that’s done, exit from vcap and, from the workstation backup path, copy the files from UAADB VM:
$scp vcap@1.2.90.17:/home/vcap/uaa_09_20_2015.sql /pcf-backup

This completes the UAADB backup process.

Backup your Console Database

The Console Database is referred to as the Apps Manager Database in Elastic Runtime 1.5.

  • Retrieve your Apps Manager Database credentials from the Elastic Runtime credentials page:
Vm Credentials    vcap / xxxxxxxxxxxxx
Credentials       root / xxxxxxxxxxxxxxxxxxx
  • Get the Apps Manager VM address from the cf-backup-09_20_2015.yml file. Log in to the Apps Manager Database VM.
$ssh vcap@1.2.90.18
  • Run these two commands (using the output from the find operation as above):
$find /var/vcap | grep 'bin/psql'
$/var/vcap/data/packages/postgres/<random-string>/bin/pg_dump -h 1.2.90.18 -U root -p 2544 console > console_09_20_2015.sql
  • Exit from the DB Console of your VM.
  • Once it is completed, exit from vcap and, from the workstation backup path, copy files from Console DB VM:
$scp vcap@1.2.90.18:/home/vcap/console_09_20_2015.sql  /pcf-backup

This completes the Console Database backup process.

Backup your NFS Server

  • Get the NFS Server VM credentials from Elastic Runtime credentials page.
  • Get the nfs_server address from the cf-backup-09_20_2015.yml file and log in to the VM via SSH:
$ssh vcap@1.2.90.15
  • From within /var/vcap/store, execute the following (this will take some time to complete):
$tar cz shared > nfs_09_20_2015.tar.gz
  • Exit from the NFS Server VM and copy the file to backup path:
$scp vcap@1.2.90.15:/var/vcap/store/nfs_09_20_2015.tar.gz /pcf-backup

This completes the NFS Server backup process.

Backup your MySQL Database

  • Get the MySQL VM deployment manifest from the deployments_09_20_2015.txt file retrieved earlier and execute:
$bosh download manifest p-mysql-abcd1234f2ad3752 mysql_09_20_2015.yml
  • Create a MySQL dump:
$mysqldump -u root -p -h 1.2.90.20 --all-databases > user_databases_09_20_2015.sql
  • The hostname and password should match those retrieved from the manifest file mysql_09_20_2015.yml.

This completes the MySQL DB backup process.

Post Backup Activities

  • Start the cloud_controller and cloud_controller_worker VMs.
$bosh -d cf-backup-09_20_2015.yml start cloud_controller-partition-cdabcd1234b253f40
$bosh -d cf-backup-09_20_2015.yml start cloud_controller_worker-partition-cdabcd1234b253f40
  • Confirm that the cloud controller is up and running.
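
With all of the exports and dumps sitting in /pcf-backup, a short script can bundle them into a dated archive and prune old archives according to whatever archival policy you defined earlier. A minimal sketch, assuming the MM_DD_YYYY date stamps used above and a 30-day retention (adjust both to your own policy; the script name is a hypothetical helper, not a Pivotal tool):

#!/bin/bash
# archive-pcf-backups.sh - bundle one day's backup files and prune old archives
set -e

BACKUP_DIR=/pcf-backup
ARCHIVE_DIR=/pcf-backup/archives
DATE=$(date +%m_%d_%Y)

mkdir -p "$ARCHIVE_DIR"

# bundle everything stamped with today's date: dumps, manifests, the NFS tarball, etc.
FILES=$(cd "$BACKUP_DIR" && ls *"$DATE"* 2>/dev/null || true)
if [ -n "$FILES" ]; then
    tar czf "$ARCHIVE_DIR/pcf-backup-$DATE.tar.gz" -C "$BACKUP_DIR" $FILES
fi

# archival policy: keep bundled archives for 30 days
find "$ARCHIVE_DIR" -name 'pcf-backup-*.tar.gz' -mtime +30 -delete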

Pivotal Cloud Foundry: the restore process

Restoring a Pivotal Cloud Foundry deployment requires that you restore your installation settings and key system databases; in other words, everything we backed up in the previous operations. You'll need to follow these steps:

  • Import the Installation Settings zip file (i.e., installation.zip) into the PCF Ops Manager > Settings.
  • Once the installation restore process is complete, check the status of all the jobs in the Elastic Runtime > Status tab.
  • If all the jobs are healthy, then restore the backed up DBs.

We’ll use the UAADB as an example. The rest will follow the same process.

  • Stop UAA:
$ bosh stop <uaa job>
  • Look up and then drop all the tables in the UAADB.
  • Log in to the UAADB VM as the vcap user, using the password from your runtime's Credentials page.
$ssh vcap@[uaadb vm ip]
  • Log in to the psql client on the UAADB VM:
$/var/vcap/data/packages/postgres/<random-string>/bin/psql -U vcap -p 2544 uaa
  • Run these commands to drop the tables:
drop schema public cascade;
create schema public;
  • Use the same password and IP address that was used to back up the UAA database, and restore the UAA database with the following commands:
    $scp uaa.sql vcap@[uaadb vm IP]: #UAADB server
    $/var/vcap/data/packages/postgres/<random-string>/bin/psql -U vcap -p 2544 uaa < uaa.sql
  • Restart UAA Job:
    $bosh start <uaa job>
  • To restore the backed up Pivotal Cloud Foundry NFS Server, simply copy the contents of your backup to /var/vcap/store on the “NFS Server” VM.

A Pivotal Cloud Foundry backup process can be scheduled (and scripted) to create restore points for your installation. You could also use the settings file backups to launch a new installation in a different availability zone or even on a different platform. PCF has provided excellent documentation on both the backup and restore process.

Thoughts? Add your comments below.

Cloud Foundry: Understanding the Core Components
https://cloudacademy.com/blog/cloud-foundry-components/ (Fri, 30 Oct 2015)

Explore the key components you’ll need to build an entire Cloud Foundry architecture.

In a recent post, I spoke about some of Cloud Foundry's main features. Now I will explore in greater depth the components you'll need to build an entire Cloud Foundry architecture.

Router

Once an application is deployed in Cloud Foundry, all external system and application traffic is controlled and directed through the router. The router maintains a dynamic route table for all applications deployed in a load balanced environment, so you don’t need to worry about updating routing information to reflect changes to a deployed application or the underlying DEAs (which we’ll discuss soon). You can also configure Router for high availability, defining the number of routers you’ll require to support a load balanced Cloud Foundry environment.

In short, Router is responsible for handling your application load in as efficient a way as possible, by reducing the overhead of maintaining routing tables and complex port configurations.
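
From a developer's point of view, that routing table is maintained for you through ordinary cf CLI commands. For example, the following (with my-app and example.com as placeholders) maps an additional route to a running app and then lists the routes in the current space; the router picks up the change without any manual reconfiguration:

cf map-route my-app example.com --hostname my-app
cf routes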

User Account and Authentication (UAA) and Login server

The UAA and Login Server manage Cloud Foundry's authentication mechanism, a kind of identity management service. UAA is an OAuth2 token-issuing server: client applications obtain tokens from it on behalf of the resource owner (the user) and present them when calling protected resources.

Droplet Execution Agent (DEA)

The DEA is at the core of Cloud Foundry's functionality. But before going deeper into the DEA itself, it's worth mentioning some of the tools that help it achieve its goal.

Buildpacks

Buildpacks are the scripts through which Cloud Foundry identifies the required runtime or framework for the application. Buildpacks are responsible for identifying an application’s related dependencies based on user-provided artifacts. They will then ensure everything is properly downloaded and configured.

So imagine that you want to push a WAR file to a Cloud Foundry environment. Buildpacks are smart enough to identify the programming language, framework, and application container you’ll need to properly deploy, and then automatically download everything for you from GitHub. If your application uses a language or framework that Cloud Foundry buildpacks do not support, you can write custom buildpacks.
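
To give a feel for what writing a custom buildpack involves, a classic (v2-style) buildpack is just a directory containing three executable scripts: bin/detect, bin/compile, and bin/release. A minimal, hypothetical sketch that recognizes any app shipping a run.sh and starts it as the web process (the buildpack name and file layout here are illustrative, not an official example):

#!/bin/bash
# bin/detect: print a name and exit 0 if this buildpack applies to the app in $1
[ -f "$1/run.sh" ] && echo "runsh-buildpack" && exit 0
exit 1

#!/bin/bash
# bin/compile: $1 is the build directory, $2 is the cache directory; install dependencies here
echo "-----> Nothing to install for this toy buildpack"

#!/bin/bash
# bin/release: print YAML describing how the resulting droplet should be started
echo "---"
echo "default_process_types:"
echo "  web: bash run.sh"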

Droplet

The droplet is the Cloud Foundry unit of execution. Once an application is pushed to Cloud Foundry and staged using a buildpack, the result is a droplet. A droplet, therefore, is nothing but an abstraction on top of the application that packages it together with metadata. Droplets are stored in blob storage for further deployment processes.

Warden

Once the droplet is ready, it needs to be hosted in a suitable environment. In Cloud Foundry, this is called a Warden container. Warden containers provide isolated, ephemeral, and resource-controlled environments.

Now we can properly define the task of a Droplet Execution Agent.

A DEA selects the appropriate buildpack and uses it to stage your application, and it manages the complete life cycle of the application instance. An application instance consists of a droplet and a Warden container. A DEA continually broadcasts the application instance's health status to the health manager, which communicates internally with the cloud controller. Requests are directed to the DEA through the cloud controller.

Cloud Controller

You can think of the cloud controller as the brains of a Cloud Foundry environment, as it manages the entire application life cycle. Hold on: didn’t I just say in the previous section that it’s the DEA that manages the life cycle of an application instance? So what’s with the cloud controller muscling in on its turf?

Let's try to clear this up. As soon as the user requests an application deployment via the CLI, the request goes first to the cloud controller. It's then the controller's responsibility to redirect the request to an available DEA in the pool. The cloud controller also tracks application metadata and stores droplets in the blob store. Hence the role played by the cloud controller begins long before the DEA gets involved, and the controller will, therefore, have a more detailed view of application deployments.

Service Brokers

Almost all applications depend on external services like a database or third-party components. In traditional development, we bind those components to an application using property files stored outside the deployable, so that they can be modified whenever required without affecting the running application code. But that's not possible when you deploy applications in Cloud Foundry, because applications live in a Warden container, which is not persistent.

Cloud Foundry, therefore, uses service brokers, through which developers can provision and bind a particular service to an application. Service brokers define the relationship between the application and services like databases. This permits loose coupling between an app and a service. Conceptually, this is no different from traditional implementations, except that, instead of having the developer manage a property file, the platform itself provides the placeholder for your service instance properties.
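
In practice, a developer consumes a brokered service with a few cf CLI commands. A quick sketch (the service name p-mysql, plan 100mb-dev, instance name orders-db, and app name my-app are placeholders; run cf marketplace to see what your platform actually offers):

cf marketplace
cf create-service p-mysql 100mb-dev orders-db
cf bind-service my-app orders-db
cf restage my-app

Binding injects the instance's credentials into the app's VCAP_SERVICES environment variable, which is the platform-provided placeholder mentioned above; restaging lets the app pick them up.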

HM9000 – Health Manager

Cloud Foundry is supposed to provide a highly available environment for all deployed applications. The health status of those applications is monitored by the health manager (HM9000). Let's say that you told the cloud controller that you want ten instances of your application running across the available DEAs. It is the responsibility of HM9000 to watch for a mismatch between the desired number of instances (ten) and the number that are actually running. If those numbers don't match, HM9000 will immediately contact the cloud controller to spin up enough new instances to match the desired number. The cloud controller does that by using the droplets stored for that application in blob storage.
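
The desired instance count in that example is simply whatever you last asked for with the cf CLI, for instance (my-app is a placeholder):

cf scale my-app -i 10     # ask for ten instances; HM9000 works to keep ten running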

NATS

Up to this point, we’ve discussed how cloud controller redirects requests to DEA or how HM9000 instructs cloud controller. But don’t you want to know how these components interact with each other? I think that’s even more important.

The Cloud Foundry architecture is inspired by distributed architecture concepts, which you can clearly see when examining how its components interact. They use NATS, a very lightweight distributed queueing and messaging system. If you are familiar with the message bus concept, then you shouldn't find it too difficult to understand the role of NATS in Cloud Foundry's architecture.

Metrics and Log Aggregators

Last but not least, we need to discuss metrics and logging. As with any PaaS deployment, it's not ideal to log in directly to Cloud Foundry instances. This makes it difficult to access logs while debugging or troubleshooting an application issue. It's also impractical to monitor your components using normal tools, because it's not advisable to set up agents on PaaS-provisioned instances.

Not to worry, however, as Cloud Foundry has already thought of that. The metrics collector provides metrics for all the components that need monitoring, and the application log aggregator streams application logs wherever you tell it to.
So, that's pretty much it for Cloud Foundry's major components.
There are just a couple of things that may not be direct parts of the Cloud Foundry environment, but are worth discussing:

The CLI (Command Line Interface) is the interface used to deploy and manage your application in the Cloud Foundry environment. Cloud Foundry's CLI documentation can be found here.

BOSH is a tool for deploying all the components we've discussed above across distributed nodes. BOSH orchestrates the deployment process of a distributed system. Detailed documentation on BOSH can be found here.

I hope these two posts have helped you better understand the design, purpose, and function of Cloud Foundry. Please feel free to leave your feedback in the comments.
