Cloud architecture refers to how technologies and components are built in a cloud environment. A cloud environment comprises a network of servers that are located in various places globally, and each serves a specific purpose. With the growth of cloud computing and cloud-native development, modern development practices are constantly changing to adapt to this rapid evolution. This Zone offers the latest information on cloud architecture, covering topics such as builds and deployments to cloud-native environments, Kubernetes practices, cloud databases, hybrid and multi-cloud environments, cloud computing, and more!
Having not done much infrastructure before, writing Terraform seemed a pretty daunting task. Learning HCL and its nuances, writing it declaratively, and configuring it all for different environments is a bit of a learning curve. Creating the same in code using an imperative style seems a better path for a developer. Setting Up This is a simple example of using Terraform's Cloud Development Kit (CDKTF) to create a Lambda function in AWS in TypeScript. To get started, follow their installation setup here. Create a new project: mkdir cdktf-lambda cd cdktf-lambda cdktf init --template="typescript" --providers="aws@~>4.0" Follow the command prompts, and lastly: npm i @cdktf/provider-archive@5.0.1 At the time, the dependencies were: JSON "dependencies": { "@cdktf/provider-archive": "^5.0.1", "@cdktf/provider-aws": "13.0.0", "cdktf": "^0.15.5", "constructs": "^10.1.310" }, "devDependencies": { "@types/jest": "^29.5.1", "@types/node": "^20.1.0", "jest": "^29.5.0", "ts-jest": "^29.1.0", "ts-node": "^10.9.1", "typescript": "^5.0.4" } The directory structure will have been created like so: The init command provides some boilerplate code to get up and running with. The main.ts file is the central file from which the code will run, and it defines a TerraformStack. This stack is where all the IaC will be placed. Let's Go CDKTF has the concept of providers, which are Terraform wrappers for third-party APIs such as AWS. We need to add one for AWS and one to handle the Lambda archive bindings: TypeScript class MyStack extends TerraformStack { private prefix = 'dale-test-'; private region = '<your-aws-region>'; private accountId = '<your-aws-account>'; constructor(scope: Construct, id: string) { super(scope, id); new ArchiveProvider(this, "archiveProvider"); new AwsProvider(this, this.prefix + "aws", { region: this.region, allowedAccountIds: [this.accountId], defaultTags: [ { tags: { name: this.prefix + 'lambda-stack', version: "1.0", } } ] }); } } There should be enough here to do a sanity-check run: cdktf diff At this stage, it errored, and the tsconfig file required the following to be added: "ignoreDeprecations": "5.0" A successful run should show: cdktf-lambda No changes. Your infrastructure matches the configuration.
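Before adding providers, it helps to see where this stack class lives. The main.ts generated by cdktf init wraps the stack in a CDKTF App and calls synth(); the snippet below is a minimal sketch of that boilerplate (the stack id simply reuses the project name from above, and the comment marks where the provider and resource code from this post goes):

TypeScript
import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";

class MyStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // Providers and resources (AwsProvider, ArchiveProvider, the Lambda, ...) go here.
  }
}

// The App is the synthesis root: cdktf diff/deploy run this file and render
// every stack into Terraform JSON under cdktf.out.
const app = new App();
new MyStack(app, "cdktf-lambda");
app.synth();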
Roles Next, add an IAM role and a policy for the Lambda: TypeScript const role = new IamRole(this, this.prefix + "iam_for_lambda", { assumeRolePolicy: new DataAwsIamPolicyDocument(this, this.prefix + "assume_role", { statement: [ { actions: [ "sts:AssumeRole" ], effect: "Allow", principals: [ { identifiers: ["lambda.amazonaws.com"], type: "Service", }, ], } ], }).json, name: this.prefix + "iam_for_lambda", }); new IamRolePolicy(this, this.prefix + "iamPolicy", { name: this.prefix + `iamPolicy-state`, role: role.id, policy: new DataAwsIamPolicyDocument(this, this.prefix + "iamPolicyDoc", { version: "2012-10-17", statement: [ { effect: "Allow", actions: ["logs:CreateLogGroup"], resources: [`arn:aws:logs:${this.region}:${this.accountId}:*`] }, { effect: "Allow", actions: [ "logs:CreateLogStream", "logs:PutLogEvents" ], resources: [ `arn:aws:logs:${this.region}:${this.accountId}:log-group:/aws/lambda/dale-test-manual:*` ] } ] }).json }); Lambda We'll create a simple form and place index.html and index.js into the dist folder: HTML <!DOCTYPE html> <html> <head> <meta charset="utf-8"/> <meta data-fr-http-equiv="x-ua-compatible" content="ie=edge"/> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"/> <title>Hello from AWS Lambda!</title> <style type="text/css"> @font-face { font-family: "sans-serif"; } body { margin: 0; font-family: "Amazon Ember", Helvetica, Arial, sans-serif; } h1 { background-color: #232f3e; color: white; font-size: 3rem; font-weight: 300; margin: 0; padding: 1rem; text-align: center; } input[type="text"] { font-family: "Amazon Ember", Helvetica, Arial, sans-serif; flex-grow: 1; border: 1px solid #aab7b8; border-radius: 2px; color: #16191f; } input[type="text"]:focus { border: 1px solid #00a1c9; box-shadow: 0 0 0 1px #00a1c9; outline: 2px dotted transparent; } form { display: flex; flex-direction: row; gap: 1rem; padding-right: 3rem; } input[type="submit"] { background-color: white; border: 1px solid #545b64; border-radius: 2px; color: #545b64; cursor: pointer; font-weight: 700; padding: .4rem 2rem; } input[type="submit"]:hover { background-color: #f2f3f3; border: 1px solid #16191f; color: #16191f; } </style> </head> <body> <h1>CDKTF Lambda Demo</h1> <form action="/" method="GET"> <input name="name" placeholder="name" label="name"> <input name="location" placeholder="location" label="location"> <input type="submit" value="submit"> </form> {formResults} {debug} </body> </html> JavaScript const fs = require('fs'); let html = fs.readFileSync('index.html', {encoding: 'utf8'}); /** * Returns an HTML page containing an interactive Web-based * tutorial. Visit the function URL to see it and learn how * to build with lambda. */ exports.handler = async (event) => { let modifiedHTML = dynamicForm(html, event.queryStringParameters); modifiedHTML = debug(modifiedHTML, event); const response = { statusCode: 200, headers: { 'Content-Type': 'text/html', }, body: modifiedHTML, }; return response; }; function debug(modifiedHTML, event) { return modifiedHTML.replace('{debug}', JSON.stringify(event)); } function dynamicForm(html, queryStringParameters) { let formres = ''; if (queryStringParameters) { Object.values(queryStringParameters).forEach(val => { formres = formres + val + ' '; }); } return html.replace('{formResults}', '<h4>Form Submission: ' + formres + '</h4>'); } Now, set up the lambda: The files are archived from the dist folder, packaged up, and set in the LambdaFunction. A LambdaFunctionUrl is set up so it can be publicly accessed. 
It is also written out as a Terraform output to see more details. TypeScript const archiveFile = new DataArchiveFile(this, this.prefix + "lambda", { outputPath: "lambda_function_payload.zip", sourceDir: path.resolve(__dirname, "dist"), type: "zip", }); const lambda = new LambdaFunction(this, this.prefix + "test_lambda", { environment: { variables: { foo: "bar", }, }, filename: "lambda_function_payload.zip", functionName: "dale_test_auto", handler: "index.handler", role: role.arn, runtime: "nodejs16.x", sourceCodeHash: archiveFile.outputBase64Sha256, }); const url = new LambdaFunctionUrl(this, this.prefix + 'lambda-url', { functionName: lambda.functionName, authorizationType: 'NONE' }); const debugOutput = new TerraformOutput(this, "lambda-function", { value: url, }); console.log(debugOutput); That's it! Deploying Now, when running cdktf diff, we should see that it will add four items: Plan: 4 to add, 0 to change, 0 to destroy. # aws_iam_role.dale-test-iam_for_lambda (dale-test-iam_for_lambda) will be created # aws_iam_role_policy.dale-test-iamPolicy (dale-test-iamPolicy) will be created # aws_lambda_function.dale-test-test_lambda (dale-test-test_lambda) will be created # aws_lambda_function_url.dale-test-lambda-url (dale-test-lambda-url) will be created Now, deploy it. cdktf deploy Plain Text Apply complete! Resources: 4 added, 0 changed, 0 destroyed. Outputs: lambda lambda-function = { "authorization_type" = "NONE" "cors" = tolist([]) "function_arn" = "arn:aws:lambda:eu-west-2:<id>:function:dale_test_auto" "function_name" = "dale_test_auto" "function_url" = "https://<random-url>.lambda-url.eu-west-2.on.aws/" "id" = "dale_test_auto" "invoke_mode" = "BUFFERED" "qualifier" = "" "timeouts" = null /* object */ "url_id" = "<random>" } The function_url is now the URL to view the Lambda, as shown below: To tear down the Lambda, simply run: Plain Text cdktf destroy Destroy complete! Resources: 4 destroyed. The full code can be found here. Conclusion Creating infrastructure using CDKTF is relatively simple. It enables a logical structuring of reusable code. Once the infrastructure grows, managing and maintaining the codebase will require less effort. Coming from a developer standpoint, it makes sense to create the infrastructure using IaC. While I am a Java developer, TypeScript was selected because the core CDKTF library is itself written in TypeScript; other languages are supported too, but they are ultimately transpiled back to TypeScript. Not covered here in depth, but the code can be unit tested to ensure everything is wired up correctly (a brief sketch follows this post), although this doesn't necessarily prevent errors downstream when planning and applying. Also not covered here are advanced techniques required when passing resources between stacks and referencing newly created resources at runtime. All of this is possible using CDKTF, however. That may be the topic for the next post. I hope you enjoyed the post, and thanks for reading.
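As a small addendum to the conclusion above: the following is a rough sketch of what such a unit test could look like, assuming MyStack is exported from main.ts and the jest wiring that cdktf init generates is in place (the assertion only inspects the synthesized Terraform JSON; it never talks to AWS):

TypeScript
import "cdktf/lib/testing/adapters/jest"; // registers the CDKTF jest matchers
import { Testing } from "cdktf";
import { LambdaFunction } from "@cdktf/provider-aws/lib/lambda-function";
import { MyStack } from "../main"; // assumes the stack class is exported

describe("MyStack", () => {
  it("synthesizes a Lambda function", () => {
    const app = Testing.app();
    const stack = new MyStack(app, "test");
    // Testing.synth renders the stack to Terraform JSON without reaching AWS.
    expect(Testing.synth(stack)).toHaveResource(LambdaFunction);
  });
});

A passing test like this proves the resource is wired into the configuration, but as noted above, it does not guarantee that plan or apply will succeed against a real account.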
File handling in AWS is essentially a cloud storage solution that enables corporations to store, manage, and access their data in the cloud. AWS provides a wide range of cloud storage solutions to handle files effectively. Among these, Amazon S3 is the most popular and widely used service for object storage in the cloud. It offers highly scalable, durable, and secure storage for any kind of data, such as images, videos, documents, and backups, at a low cost. Amazon EBS, on the other hand, is a block-level storage service that provides persistent storage volumes for use with Amazon EC2 instances. It is ideal for transactional workloads that require low latency and high throughput. Amazon EFS, a fully managed file system, is designed to be highly available, durable, and scalable. It provides a simple interface to manage file systems and supports multiple EC2 instances simultaneously. Finally, Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system that can be accessed over the industry-standard SMB protocol. This service is ideal for customers who want to move their Windows-based applications to the AWS Cloud and need shared file storage. With these AWS file-handling services, firms can store, manage, and access their data efficiently and securely in the cloud. How EBS Can Help With High-Performance Computing Amazon Elastic Block Store (EBS) is a cloud-based block-level storage service that provides highly scalable and persistent storage volumes for use with Amazon EC2 instances. You can think of it as an external hard drive for your computer. You can attach it to a laptop, detach it, and attach it to another laptop. However, you cannot attach it to two laptops at the same time. EBS volumes work in a similar fashion. EBS volumes are designed to deliver low-latency performance and high throughput, making them ideal for high-performance computing (HPC) workloads. EBS volumes can be easily attached to EC2 instances, allowing users to create customized HPC environments that meet their specific needs. Additionally, EBS volumes support a variety of data-intensive workloads, such as databases, data warehousing, and big data analytics, making them a versatile choice for enterprises looking to optimize their storage infrastructure. Since EBS is attached to a single EC2 instance, it acts as a dedicated file store for that instance, thereby providing low-latency file access. How S3 Becomes the Cornerstone of Your Cloud Storage Needs and Capacities No discussion on cloud storage is complete without mentioning S3. EBS is great as a storage solution when the application accessing it is running on a single server. But when multiple servers need to access the same set of files, EBS will not work. Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, durability, and security for organizations of all sizes. With S3, users can store and retrieve any amount of data from anywhere in the world, making it an ideal solution for cloud storage needs. S3 provides a highly available and fault-tolerant architecture that ensures data is always accessible, with a durability of 99.999999999% (11 nines). This means that operations can rely on S3 for critical data storage, backup, and disaster recovery needs. Additionally, S3 supports a variety of use cases, such as data lakes, content distribution, and backup and archiving, making it a versatile choice for enterprises looking to optimize their cloud storage infrastructure.
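To make the object-storage model concrete, here is a hedged sketch of writing and reading an S3 object with the AWS SDK for JavaScript v3 in TypeScript; the bucket name, key, and region are placeholders for illustration only:

TypeScript
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

// S3 is addressed by bucket + key over an API, not mounted like a disk.
const s3 = new S3Client({ region: "eu-west-2" }); // example region

export async function uploadReport(body: string): Promise<void> {
  await s3.send(new PutObjectCommand({
    Bucket: "example-reports-bucket", // placeholder bucket name
    Key: "reports/2023/summary.json",
    Body: body,
    ContentType: "application/json",
  }));
}

export async function downloadReport(): Promise<string> {
  const result = await s3.send(new GetObjectCommand({
    Bucket: "example-reports-bucket",
    Key: "reports/2023/summary.json",
  }));
  // The SDK v3 streaming body exposes a transformToString helper in Node.
  return result.Body ? await result.Body.transformToString() : "";
}

Because access goes through an API rather than a mounted volume, any number of servers, or even external systems, can work with the same objects, which is exactly the property the next paragraphs rely on.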
To connect to an S3 bucket, EC2 instances running within a VPC can either use a VPC endpoint or connect over the internet. Connecting through a VPC endpoint will provide much lower latency. When an application requires file interchange with a third-party application, S3 is one of the best choices. Files from external applications can be put into an S3 bucket using the SFTP protocol; the AWS Transfer Family provides support for this. Applications running in EC2 instances can then access these files directly. Unlock the Power of EFS for Enterprise-Grade Scalable File Storage S3 can be accessed by many applications, even external ones running outside AWS. However, if the requirement is to store files to be accessed by multiple EC2 instances within a VPC, EFS provides a scalable low-latency solution. This is similar to accessing shared files within a corporate intranet. Amazon Elastic File System (EFS) is a fully managed file storage service designed to provide enterprise-grade scalable file storage for use with Amazon EC2 instances. EFS volumes are highly available, durable, and scalable, making them ideal for companies that require high-performance file storage solutions. With EFS, users can create file systems that can scale up or down automatically in response to changes in demand, without impacting performance or availability. This means that organizations can easily manage their file storage needs, without having to worry about capacity planning or performance issues. Additionally, EFS supports a variety of file-based workloads, such as content management, web serving, and home directories, making it a versatile choice for firms of all sizes. After setting up EFS within a VPC, mount targets need to be created. EFS supports one mount target per availability zone within a VPC. Once the mount targets are created, the file system can be mounted onto compute resources like EC2, ECS, and Lambda. Revolutionize Your Data Access and Analysis With FSx for Lustre and Windows File Server Amazon FSx for Lustre and Amazon FSx for Windows File Server are fully managed file storage services that help organizations revolutionize their data access and analysis capabilities. FSx for Lustre is a high-performance file system designed for compute-intensive workloads, such as machine learning, high-performance computing, and video processing. It delivers sub-millisecond latencies and high throughput, making it ideal for applications that require fast access to large datasets. FSx for Windows File Server, on the other hand, provides fully managed Windows file shares backed by Windows Server and the Server Message Block (SMB) protocol. It supports a wide range of use cases, such as home directories, application data, and media editing, and provides built-in features for data protection and disaster recovery. Selecting a File-Handling Service When deciding whether to use Amazon EFS or Amazon FSx for Lustre and Windows File Server, it's important to consider the specific needs of your organization. EFS is a good choice for general-purpose file storage workloads that require high availability and scalability, such as content management, web serving, and home directories. On the other hand, FSx for Lustre is designed for compute-intensive workloads that require high-performance file storage, such as machine learning and high-performance computing. Furthermore, FSx for Windows File Server is ideal for firms that require fully managed Windows file shares backed by Windows Server and the SMB protocol.
Ultimately, the choice between EFS and FSx will depend on the specific requirements of your workload and the level of performance, scalability, and manageability that your application requires.
What Is Terraform? Terraform is an open-source “Infrastructure as Code” tool created by HashiCorp. A declarative coding tool, Terraform enables developers to use a high-level configuration language called HCL (HashiCorp Configuration Language) to describe the desired “end-state” cloud or on-premises infrastructure for running an application. It then generates a plan for reaching that end-state and executes the plan to provision the infrastructure. Because Terraform uses a simple syntax, can provision infrastructure across multiple clouds and on-premises data centers, and can safely and efficiently re-provision infrastructure in response to configuration changes, it is currently one of the most popular infrastructure automation tools available. If your organization plans to deploy a hybrid cloud or multi-cloud environment, you’ll likely want or need to get to know Terraform. Why Infrastructure as Code (IaC)? To better understand the advantages of Terraform, it helps to first understand the benefits of Infrastructure as Code (IaC). IaC allows developers to codify infrastructure in a way that makes provisioning automated, faster, and repeatable. It’s a key component of Agile and DevOps practices such as version control, continuous integration, and continuous deployment. Infrastructure as code can help with the following: Improve speed: Automation is faster than manually navigating an interface when you need to deploy and/or connect resources. Improve reliability: If your infrastructure is large, it becomes easy to misconfigure a resource or provision services in the wrong order. With IaC, the resources are always provisioned and configured exactly as declared. Prevent configuration drift: Configuration drift occurs when the configuration that provisioned your environment no longer matches the actual environment. (See ‘Immutable infrastructure’ below.) Support experimentation, testing, and optimization: Because Infrastructure as Code makes provisioning new infrastructure so much faster and easier, you can make and test experimental changes without investing lots of time and resources, and if you like the results, you can quickly scale up the new infrastructure for production. Why Terraform? There are a few key reasons developers choose to use Terraform over other Infrastructure as Code tools: Open source: Terraform is backed by large communities of contributors who build plugins to the platform. Regardless of which cloud provider you use, it’s easy to find plugins, extensions, and professional support. This also means Terraform evolves quickly, with new benefits and improvements added consistently. Platform agnostic: Meaning you can use it with any cloud services provider. Most other IaC tools are designed to work with a single cloud provider. Immutable infrastructure: Most Infrastructure as Code tools create mutable infrastructure, meaning the infrastructure can change to accommodate changes such as a middleware upgrade or new storage server. The danger with mutable infrastructure is configuration drift — as the changes pile up, the actual provisioning of different servers or other infrastructure elements ‘drifts’ further from the original configuration, making bugs or performance issues difficult to diagnose and correct. Terraform provisions immutable infrastructure, which means that with each change to the environment, the current configuration is replaced with a new one that accounts for the change, and the infrastructure is reprovisioned. 
Even better, previous configurations can be retained as versions to enable rollbacks if necessary or desired. Terraform Modules Terraform modules are small, reusable Terraform configurations for multiple infrastructure resources that are used together. Terraform modules are useful because they allow complex resources to be automated with reusable, configurable constructs. Writing even a very simple Terraform file results in a module. A module can call other modules — called child modules — which can make assembling configuration faster and more concise. Modules can also be called multiple times, either within the same configuration or in separate configurations. Terraform Providers Terraform providers are plugins that implement resource types. Providers contain all the code needed to authenticate and connect to a service — typically from a public cloud provider — on behalf of the user. You can find providers for the cloud platforms and services you use, add them to your configuration, and then use their resources to provision infrastructure. Providers are available for nearly every major cloud provider, SaaS offering, and more, developed and/or supported by the Terraform community or individual organizations. Refer to the Terraform documentation for a detailed list. Terraform vs. Kubernetes Sometimes, there is confusion between Terraform and Kubernetes and what they actually do. The truth is that they are not alternatives and actually work effectively together. Kubernetes is an open-source container orchestration system that lets developers schedule deployments onto nodes in a compute cluster and actively manages containerized workloads to ensure that their state matches the users’ intentions. Terraform, on the other hand, is an Infrastructure as Code tool with a much broader reach, letting developers automate complete infrastructure that spans multiple public clouds and private clouds. Terraform can automate and manage Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or even Software-as-a-Service (SaaS) level capabilities and build all these resources across all those providers in parallel. You can use Terraform to automate the provisioning of Kubernetes — particularly managed Kubernetes clusters on cloud platforms — and to automate the deployment of applications into a cluster. Terraform vs. Ansible Terraform and Ansible are both Infrastructure as Code tools, but there are a couple of significant differences between the two: While Terraform is purely a declarative tool (see above), Ansible combines both declarative and procedural configurations. In the procedural configuration, you specify the steps, or the precise manner, in which you want to provision infrastructure to the desired state. Procedural configuration is more work, but it provides more control. Terraform is open source; Ansible is developed and sold by Red Hat. IBM and Terraform IBM Cloud Schematics is IBM’s free cloud automation tool based on Terraform. IBM Cloud Schematics allows you to fully manage your Terraform-based infrastructure automation so you can spend more time building applications and less time building environments.
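The modules description above stays conceptual. For readers following the CDKTF article earlier in this report, the rough TypeScript analogue of a reusable module is a custom construct; the sketch below uses hypothetical names (StaticAssets, bucketPrefix) purely for illustration and is not a pattern taken from this article:

TypeScript
import { Construct } from "constructs";
import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";

// Inputs play the role of module variables.
export interface StaticAssetsProps {
  bucketPrefix: string;
  environment: string;
}

// A reusable construct: group resources that are used together,
// much as a Terraform module does.
export class StaticAssets extends Construct {
  public readonly bucket: S3Bucket;

  constructor(scope: Construct, id: string, props: StaticAssetsProps) {
    super(scope, id);
    this.bucket = new S3Bucket(this, "bucket", {
      bucketPrefix: props.bucketPrefix,
      tags: { environment: props.environment },
    });
  }
}

// Like calling a module multiple times, instantiate it once per environment:
//   new StaticAssets(this, "assets-dev",  { bucketPrefix: "assets-", environment: "dev" });
//   new StaticAssets(this, "assets-prod", { bucketPrefix: "assets-", environment: "prod" });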
Kubernetes is an open-source container orchestration platform that has revolutionized the way applications are deployed and managed. With Kubernetes, developers can easily deploy and manage containerized applications at scale and in a consistent and predictable manner. However, managing Kubernetes environments can be challenging, and security risks are always a concern. Therefore, it's important to have the right auditing tools in place to ensure that the Kubernetes environment is secure, compliant, and free of vulnerabilities. In this article, we will discuss some of the top auditing tools that can be used to help secure Kubernetes and ensure compliance with best practices. 1. Kubernetes Audit Kubernetes Audit is a native Kubernetes tool that provides an audit log of all changes made through the Kubernetes API server. In addition, it captures events related to requests made to the Kubernetes API server and the responses generated by the server. This audit information can be used to troubleshoot issues and verify compliance with best practices. Kubernetes Audit can be enabled by adding a flag to the Kubernetes API server configuration file. Once enabled, Kubernetes Audit can capture a wide range of events, such as the creation and deletion of pods, services, and deployments, and changes to service accounts and role bindings. The audit log can be stored in various locations, including log files on the node, a container, or Syslog. By using Kubernetes Audit, administrators can quickly determine if there is any unauthorized access or activities within the Kubernetes environment. It also provides an auditable record of all changes made to the environment, making it easier to identify any issues that may arise. 2. Kube-bench Kube-bench is an open-source tool that is designed to check Kubernetes clusters against the Kubernetes Benchmarks - a collection of security configuration best practices developed by the Center for Internet Security (CIS). Kube-bench can be used to identify any misconfigurations or risks that may exist within the Kubernetes environment and ensure compliance with the CIS Kubernetes Benchmark. Kube-bench checks Kubernetes clusters against the 120 available CIS Kubernetes Benchmark checks and produces a report of non-compliant configurations. It can be run manually on a one-time basis or in a continuous integration pipeline that can help ensure that new applications or changes do not affect the cluster's compliance. Kube-bench is capable of testing various aspects of Kubernetes security, including API server, etcd, nodes, pods, network policies, and others. Kube-bench provides detailed instructions on how to resolve each failed check through remediation steps, making it easy for administrators to address any issues found during the audit process. Overall, kube-bench makes it easier for administrators to achieve a highly secure Kubernetes environment by providing an automated way of checking Kubernetes against the CIS Benchmarks. 3. Kube-hunter Kube-hunter is another open-source tool designed to identify Kubernetes security vulnerabilities by scanning a Kubernetes cluster for weaknesses. The tool uses a range of techniques to identify potential issues, including port scanning, service discovery, and scanning for known vulnerabilities. Kube-hunter can be used to perform various security checks on Kubernetes clusters, including checks for RBAC misconfigurations, exposed Kubernetes dashboards, and other security issues that could lead to unauthorized access.
The tool is designed to be easy to use and requires no configuration - simply run kube-hunter from the command line and let it do its job. One unique feature of kube-hunter is that it can be run as either an offensive or defensive tool. Offensive mode attempts to actively penetrate the Kubernetes cluster to identify vulnerabilities, while defensive mode simulates an attack by scanning for known vulnerabilities and misconfigurations. Both modes are great for identifying security vulnerabilities in a Kubernetes environment and improving overall security posture. Overall, kube-hunter is a powerful tool for identifying security risks in Kubernetes clusters and can be an essential part of any Kubernetes security strategy. The tool is actively developed by Aqua Security and has a large and active community backing it. 4. Polaris Polaris is a free, open-source tool developed by Fairwinds that performs automated configuration validation for Kubernetes clusters. Polaris can be used to assess cluster compliance with Kubernetes configuration management best practices and ensure Kubernetes resources conform to defined policies. Polaris can detect and alert on various issues that might occur in a Kubernetes cluster, including inappropriate resource requests, non-compliant Pod security policies, misconfigured access control lists, and other common misconfigurations. One of Polaris' most valuable features is its integration with Prometheus Alert Manager, which automatically scans Kubernetes configurations and generates alerts when any of the predefined policies are violated. The tool can also be used to generate custom policies that meet specific cluster and workload requirements. Overall, Polaris is an essential tool for Kubernetes cluster configuration management and is well-suited to companies that require a more proactive approach to security. The automation of the tool significantly reduces the time it takes to perform cluster configurations and policy evaluations, ensuring that Kubernetes resources are continuously provisioned correctly and compliant with established policies. Conclusion Kubernetes provides a powerful platform for deploying and managing containerized applications, but it needs to be secured to protect sensitive data, prevent security breaches, and ensure compliance with industry regulations. Utilizing the right auditing tools is essential for maintaining the security and compliance of Kubernetes environments, detecting vulnerabilities, and verifying configurations. There are several Kubernetes auditing tools available, from native Kubernetes Audit to open-source tools like Kube-bench, kube-hunter, and Polaris. Each tool has its own unique features and capabilities, and finding the right one depends on your specific needs. By implementing and regularly using an auditing tool or combination of tools, organizations can minimize the risk of security breaches, mitigate vulnerabilities, and ensure compliance with regulatory requirements.
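As a small follow-up to the native Kubernetes Audit log described in the first section: once the API server writes audit events to a file (the --audit-log-path flag), they can be post-processed with a few lines of code. The sketch below, in TypeScript, assumes the standard audit.k8s.io/v1 event shape and a JSON-lines log file at a hypothetical path, and simply prints requests that were denied:

TypeScript
import * as fs from "fs";
import * as readline from "readline";

// Only the audit event fields used below; the full audit.k8s.io/v1 Event has more.
interface AuditEvent {
  verb: string;
  requestURI: string;
  user?: { username?: string };
  responseStatus?: { code?: number };
}

// Scan a JSON-lines audit log and report denied requests (401/403),
// a quick way to spot unauthorized access attempts.
async function reportDenied(logPath: string): Promise<void> {
  const rl = readline.createInterface({ input: fs.createReadStream(logPath) });
  for await (const line of rl) {
    if (!line.trim()) continue;
    const event = JSON.parse(line) as AuditEvent;
    const code = event.responseStatus?.code ?? 0;
    if (code === 401 || code === 403) {
      console.log(`${event.user?.username ?? "unknown"} ${event.verb} ${event.requestURI} -> ${code}`);
    }
  }
}

reportDenied("/var/log/kubernetes/audit.log").catch(console.error); // example path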
In many places, you can read that Podman is a drop-in replacement for Docker. But is it as easy as it sounds? In this blog, you will start with a production-ready Dockerfile and execute the Podman commands just like you would do when using Docker. Let’s investigate whether this works without any problems! Introduction Podman is a container engine, just as Docker is. Podman, however, is a daemonless container engine, and it runs containers by default as rootless containers. This is more secure than running containers as root. The Docker daemon can also run as a non-root user nowadays. Podman advertises on its website that Podman is a drop-in replacement for Docker. Just add alias docker=podman, and you will be fine. Let’s investigate whether it is that simple. In the remainder of this blog, you will try to build a production-ready Dockerfile for running a Spring Boot application. You will run it as a single container, and you will try to run two containers and have some inter-container communication. In the end, you will verify how volumes can be mounted. One of the prerequisites for this blog is using a Linux operating system. Podman is not available for Windows. The sources used in this blog can be found on GitHub. The Dockerfile you will be using runs a Spring Boot application. It is a basic Spring Boot application containing one controller which returns a hello message. Build the jar: Shell $ mvn clean verify Run the jar: Shell $ java -jar target/mypodmanplanet-0.0.1-SNAPSHOT.jar Check the endpoint: Shell $ curl http://localhost:8080/hello Hello Podman! The Dockerfile is based on a previous blog about Docker best practices. The file 1-Dockerfile-starter can be found in the Dockerfiles directory. Dockerfile FROM eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f AS builder WORKDIR application ARG JAR_FILE COPY target/${JAR_FILE} app.jar RUN java -Djarmode=layertools -jar app.jar extract FROM eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f WORKDIR /opt/app RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser COPY --from=builder application/dependencies/ ./ COPY --from=builder application/spring-boot-loader/ ./ COPY --from=builder application/snapshot-dependencies/ ./ COPY --from=builder application/application/ ./ RUN chown -R javauser:javauser . USER javauser ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"] Prerequisites Prerequisites for this blog are: Basic Linux knowledge, Ubuntu 22.04 is used during this post; Basic Java and Spring Boot knowledge; Basic Docker knowledge. Installation Installing Podman is quite easy. Just run the following command. Shell $ sudo apt-get install podman Verify the correct installation. Shell $ podman --version podman version 3.4.4 You can also install podman-docker, which will create an alias when you use docker in your commands. It is advised to wait for the conclusion of this post before you install this one. Build Dockerfile The first thing to do is to build the container image. Execute the following command from the root of the repository. Shell $ podman build . 
--tag mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT -f Dockerfiles/1-Dockerfile-starter --build-arg JAR_FILE=mypodmanplanet-0.0.1-SNAPSHOT.jar [1/2] STEP 1/5: FROM eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f AS builder [2/2] STEP 1/10: FROM eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f Error: error creating build container: short-name "eclipse-temurin@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf" This returns an error while retrieving the base image. The error message refers to /etc/containers/registries.conf. The following is stated in this file. Plain Text # For more information on this configuration file, see containers-registries.conf(5). # # NOTE: RISK OF USING UNQUALIFIED IMAGE NAMES # We recommend always using fully qualified image names including the registry # server (full dns name), namespace, image name, and tag # (e.g., registry.redhat.io/ubi8/ubi:latest). Pulling by digest (i.e., # quay.io/repository/name@digest) further eliminates the ambiguity of tags. # When using short names, there is always an inherent risk that the image being # pulled could be spoofed. For example, a user wants to pull an image named # `foobar` from a registry and expects it to come from myregistry.com. If # myregistry.com is not first in the search list, an attacker could place a # different `foobar` image at a registry earlier in the search list. The user # would accidentally pull and run the attacker's image and code rather than the # intended content. We recommend only adding registries which are completely # trusted (i.e., registries which don't allow unknown or anonymous users to # create accounts with arbitrary names). This will prevent an image from being # spoofed, squatted or otherwise made insecure. If it is necessary to use one # of these registries, it should be added at the end of the list. To conclude, it is suggested to use a fully qualified image name. This means that you need to change the lines containing: Dockerfile eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f Into: Dockerfile docker.io/eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f You just add docker.io/ to the image name. A minor change, but already one difference compared to Docker. The image name is fixed in file 2-Dockerfile-fix-shortname, so let’s try building the image again. Shell $ podman build . --tag mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT -f Dockerfiles/2-Dockerfile-fix-shortname --build-arg JAR_FILE=mypodmanplanet-0.0.1-SNAPSHOT.jar [1/2] STEP 1/5: FROM docker.io/eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f AS builder Trying to pull docker.io/library/eclipse-temurin@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f... 
Getting image source signatures Copying blob 72ac8a0a29d6 done Copying blob f56be85fc22e done Copying blob f8ed194273be done Copying blob e5daea9ee890 done [2/2] STEP 1/10: FROM docker.io/eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f Error: error creating build container: writing blob: adding layer with blob "sha256:f56be85fc22e46face30e2c3de3f7fe7c15f8fd7c4e5add29d7f64b87abdaa09": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/shadow): Check /etc/subuid and /etc/subgid: lchown /etc/shadow: invalid argument Now there is an error about potentially insufficient UIDs or GIDs available in the user namespace. More information about this error can be found here. It is very well explained in that post, and it is too much to repeat all of this in this post. The summary is that the image which is being pulled has files owned by UIDs over 65,536. Due to that issue, the image would not fit into rootless Podman's default UID mapping, which limits the number of UIDs and GIDs available. So, how to solve this? First, check the contents of /etc/subuid and /etc/subgid. In my case, the following is the output. For you, it will probably be different. Shell $ cat /etc/subuid admin:100000:65536 $ cat /etc/subgid admin:100000:65536 The admin user listed in the output has 100,000 as the first UID or GID available, and it has a size of 65,536. The format is user:start:size. This means that the admin user has access to UIDs or GIDs 100,000 up to and including 165,535. My current user is not listed here, and that means that my user can only allocate 1 UID and 1 GID for the container. That 1 UID/GID is already taken for the root user in the container. If a container image needs an extra user, there will be a problem, as you can see above. This can be solved by adding UIDs and GIDs for your user. Let's add values 200,000 up to and including 265,535 to your user. Shell $ sudo usermod --add-subuids 200000-265535 --add-subgids 200000-265535 <replace with your user> Verify the contents of both files again. The user is added to both files. Shell $ cat /etc/subgid admin:100000:65536 <your user>:200000:65536 $ cat /etc/subuid admin:100000:65536 <your user>:200000:65536 Second, you need to run the following command. Shell $ podman system migrate Try to build the image again, and now it works. Shell $ podman build . --tag mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT -f Dockerfiles/2-Dockerfile-fix-shortname --build-arg JAR_FILE=mypodmanplanet-0.0.1-SNAPSHOT.jar [1/2] STEP 1/5: FROM docker.io/eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f AS builder Trying to pull docker.io/library/eclipse-temurin@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f... 
Getting image source signatures Copying blob f56be85fc22e done Copying blob f8ed194273be done Copying blob 72ac8a0a29d6 done Copying blob e5daea9ee890 done Copying config c74d412c3d done Writing manifest to image destination Storing signatures [1/2] STEP 2/5: WORKDIR application --> d4f0e970dc1 [1/2] STEP 3/5: ARG JAR_FILE --> ca97dcd6f2a [1/2] STEP 4/5: COPY target/${JAR_FILE} app.jar --> 58d88cfa511 [1/2] STEP 5/5: RUN java -Djarmode=layertools -jar app.jar extract --> 348cae813a4 [2/2] STEP 1/10: FROM docker.io/eclipse-temurin:17.0.6_10-jre-alpine@sha256:c26a727c4883eb73d32351be8bacb3e70f390c2c94f078dc493495ed93c60c2f [2/2] STEP 2/10: WORKDIR /opt/app --> 4118cdf90b5 [2/2] STEP 3/10: RUN addgroup --system javauser && adduser -S -s /usr/sbin/nologin -G javauser javauser --> cd11f346381 [2/2] STEP 4/10: COPY --from=builder application/dependencies/ ./ --> 829bffcb6c7 [2/2] STEP 5/10: COPY --from=builder application/spring-boot-loader/ ./ --> 2a93f97d424 [2/2] STEP 6/10: COPY --from=builder application/snapshot-dependencies/ ./ --> 3e292cb0456 [2/2] STEP 7/10: COPY --from=builder application/application/ ./ --> 5dd231c5b51 [2/2] STEP 8/10: RUN chown -R javauser:javauser . --> 4d736e8c3bb [2/2] STEP 9/10: USER javauser --> d7a96ca6f36 [2/2] STEP 10/10: ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"] [2/2] COMMIT mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT --> 567fd123071 Successfully tagged localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT 567fd1230713f151950de7151da82a19d34f80af0384916b13bf49ed72fd2fa1 Verify the list of images with Podman just like you would do with Docker: Shell $ podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/mydeveloperplanet/mypodmanplanet 0.0.1-SNAPSHOT 567fd1230713 2 minutes ago 209 MB Is Podman a drop-in replacement for Docker for building a Dockerfile? No, it is not a drop-in replacement because you needed to use the fully qualified image name for the base image in the Dockerfile, and you needed to make changes to the user namespace in order to be able to pull the image. Besides these two changes, building the container image just worked. Start Container Now that you have built the image, it is time to start a container. Shell $ podman run --name mypodmanplanet -d localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT The container has started successfully. Shell $ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 27639dabb573 localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT 18 seconds ago Up 18 seconds ago mypodmanplanet You can also inspect the container logs. Shell $ podman logs mypodmanplanet . 
____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v3.0.5) 2023-04-22T14:38:05.896Z INFO 1 --- [ main] c.m.m.MyPodmanPlanetApplication : Starting MyPodmanPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.6 with PID 1 (/opt/app/BOOT-INF/classes started by javauser in /opt/app) 2023-04-22T14:38:05.898Z INFO 1 --- [ main] c.m.m.MyPodmanPlanetApplication : No active profile set, falling back to 1 default profile: "default" 2023-04-22T14:38:06.803Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http) 2023-04-22T14:38:06.815Z INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2023-04-22T14:38:06.816Z INFO 1 --- [ main] o.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.1.7] 2023-04-22T14:38:06.907Z INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2023-04-22T14:38:06.910Z INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 968 ms 2023-04-22T14:38:07.279Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path '' 2023-04-22T14:38:07.293Z INFO 1 --- [ main] c.m.m.MyPodmanPlanetApplication : Started MyPodmanPlanetApplication in 1.689 seconds (process running for 1.911) Verify whether the endpoint can be accessed. Shell $ curl http://localhost:8080/hello curl: (7) Failed to connect to localhost port 8080 after 0 ms: Connection refused That’s not the case. With Docker, you can inspect the container to see which IP address is allocated to the container. Shell $ podman inspect mypodmanplanet | grep IPAddress "IPAddress": "", It seems that the container does not have a specific IP address. The endpoint is also not accessible at localhost. The solution is to add a port mapping when creating the container. Stop the container and remove it. Shell $ podman stop mypodmanplanet mypodmanplanet $ podman rm mypodmanplanet 27639dabb5730d3244d205200a409dbc3a1f350196ba238e762438a4b318ef73 Start the container again, but this time with a port mapping of internal port 8080 to an external port 8080. Shell $ podman run -p 8080:8080 --name mypodmanplanet -d localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT Verify again whether the endpoint can be accessed. This time it works. Shell $ curl http://localhost:8080/hello Hello Podman! Stop and remove the container before continuing this blog. Is Podman a drop-in replacement for Docker for running a container image? No, it is not a drop-in replacement. Although it was possible to use exactly the same commands as with Docker, you needed to explicitly add a port mapping. Without the port mapping, it was not possible to access the endpoint. Volume Mounts Volume mounts and access to directories and files outside the container and inside a container often lead to Permission Denied errors. In a previous blog, this behavior is extensively described for the Docker engine. It is interesting to see how this works when using Podman. You will map an application.properties file in the container next to the jar file. The Spring Boot application will pick up this application.properties file. 
The file configures the server port to port 8082, and the file is located in the directory properties in the root of the repository. Properties files server.port=8082 Run the container with a port mapping from internal port 8082 to external port 8083 and mount the application.properties file into the container directory /opt/app where also the jar file is located. The volume mount has the property ro in order to indicate that it is a read-only file. Shell $ podman run -p 8083:8082 --volume ./properties/application.properties:/opt/app/application.properties:ro --name mypodmanplanet localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT Verify whether the endpoint can be accessed and whether it works. Shell $ curl http://localhost:8083/hello Hello Podman! Open a shell in the container and list the directory contents in order to view the ownership of the file. Shell $ podman exec -it mypodmanplanet sh /opt/app $ ls -la total 24 drwxr-xr-x 1 javauser javauser 4096 Apr 15 10:33 . drwxr-xr-x 1 root root 4096 Apr 9 12:57 .. drwxr-xr-x 1 javauser javauser 4096 Apr 9 12:57 BOOT-INF drwxr-xr-x 1 javauser javauser 4096 Apr 9 12:57 META-INF -rw-r--r-- 1 root root 16 Apr 15 10:24 application.properties drwxr-xr-x 1 javauser javauser 4096 Apr 9 12:57 org With Docker, the file would have been owned by your local system user, but with Podman, the file is owned by root. Let’s check the permissions of the file on the local system. Shell $ ls -la total 12 drwxr-xr-x 2 <myuser> domain users 4096 apr 15 12:24 . drwxr-xr-x 8 <myuser> domain users 4096 apr 15 12:24 .. -rw-r--r-- 1 <myuser> domain users 16 apr 15 12:24 application.properties As you can see, the file on the local system is owned by <myuser>. This means that your host user, who is running the container, is seen as a user root inside of the container. Open a shell in the container and try to change the contents of the file application.properties. You will notice that this is not allowed because you are a user javauser. Shell $ podman exec -it mypodmanplanet sh /opt/app $ vi application.properties /opt/app $ whoami javauser Stop and remove the container. Run the container, but this time with property U instead of ro. The U suffix tells Podman to use the correct host UID and GID based on the UID and GID within the container to change the owner and group of the source volume recursively. Shell $ podman run -p 8083:8082 --volume ./properties/application.properties:/opt/app/application.properties:U --name mypodmanplanet localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT Open a shell in the container, and now the user javauser is the owner of the file. Shell $ podman exec -it mypodmanplanet sh /opt/app $ ls -la total 24 drwxr-xr-x 1 javauser javauser 4096 Apr 15 10:41 . drwxr-xr-x 1 root root 4096 Apr 9 12:57 .. drwxr-xr-x 1 javauser javauser 4096 Apr 9 12:57 BOOT-INF drwxr-xr-x 1 javauser javauser 4096 Apr 9 12:57 META-INF -rw-r--r-- 1 javauser javauser 16 Apr 15 10:24 application.properties drwxr-xr-x 1 javauser javauser 4096 Apr 9 12:57 org On the local system, a different UID and GID than my local user have taken ownership. Shell $ ls -la properties/ total 12 drwxr-xr-x 2 <myuser> domain users 4096 apr 15 12:24 . drwxr-xr-x 8 <myuser> domain users 4096 apr 15 12:24 .. -rw-r--r-- 1 200099 200100 16 apr 15 12:24 application.properties This time, changing the file on the local system is not allowed, but it is allowed inside the container for user javauser. Is Podman a drop-in replacement for Docker for mounting volumes inside a container? 
No, it is not a drop-in replacement. The file permissions function is a bit different than with the Docker engine. You need to know the differences in order to be able to mount files and directories inside containers. Pod Podman knows the concept of a Pod, just like a Pod in Kubernetes. A Pod allows you to group containers. A Pod also has a shared network namespace, and this means that containers inside a Pod can connect to each other. More information about container networking can be found here. This means that Pods are the first choice for grouping containers. When using Docker, you will use Docker Compose for this. There exists something like Podman Compose, but this deserves a blog in itself. Let’s see how this works. You will set up a Pod running two containers with the Spring Boot application. First, you need to create a Pod. You also need to expose the ports you want to be accessible outside of the Pod. This can be done with the -p argument. And you give the Pod a name, hello-pod in this case. Shell $ podman pod create -p 8080-8081:8080-8081 --name hello-pod When you list the Pod, you notice that it already contains one container. This is the infra container. This infra container holds the namespace in order that containers can connect to each other, and it enables starting and stopping containers in the Pod. The infra container is based on the k8s.gcr.io/pause image. Shell $ podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS dab9029ad0c5 hello-pod Created 3 seconds ago aac3420b3672 1 $ podman ps --all CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES aac3420b3672 k8s.gcr.io/pause:3.5 4 minutes ago Created 0.0.0.0:8080-8081->8080-8081/tcp dab9029ad0c5-infra Create a container mypodmanplanet-1 and add it to the Pod. By means of the --env argument, you change the port of the Spring Boot application to port 8081. Shell $ podman create --pod hello-pod --name mypodmanplanet-1 --env 'SERVER_PORT=8081' localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT env Start the Pod. Shell $ podman pod start hello-pod Verify whether the endpoint can be reached at port 8081 and verify that the endpoint at port 8080 cannot be reached. Shell $ curl http://localhost:8081/hello Hello Podman! $ curl http://localhost:8080/hello curl: (56) Recv failure: Connection reset by peer Add a second container mypodmanplanet-2 to the Pod, this time running at the default port 8080. Shell $ podman create --pod hello-pod --name mypodmanplanet-2 localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT Verify the Pod status. It says that the status is Degraded. Shell $ podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS dab9029ad0c5 hello-pod Degraded 9 minutes ago aac3420b3672 3 Take a look at the containers. Two containers are running, and a new container is just created. That is the reason the Pod has the status Degraded. Shell $ podman ps --all CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES aac3420b3672 k8s.gcr.io/pause:3.5 11 minutes ago Up 2 minutes ago 0.0.0.0:8080-8081->8080-8081/tcp dab9029ad0c5-infra 321a62fbb4fc localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT env 3 minutes ago Up 2 minutes ago 0.0.0.0:8080-8081->8080-8081/tcp mypodmanplanet-1 7b95fb521544 localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT About a minute ago Created 0.0.0.0:8080-8081->8080-8081/tcp mypodmanplanet-2 Start the second container and verify the Pod status. The status is now Running. 
Shell $ podman start mypodmanplanet-2 $ podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS dab9029ad0c5 hello-pod Running 12 minutes ago aac3420b3672 3 Both endpoints can now be reached. Shell $ curl http://localhost:8080/hello Hello Podman! $ curl http://localhost:8081/hello Hello Podman! Verify whether you can access the endpoint of container mypodmanplanet-1 from within mypodmanplanet-2. This also works. Shell $ podman exec -it mypodmanplanet-2 sh /opt/app $ wget http://localhost:8081/hello Connecting to localhost:8081 (127.0.0.1:8081) saving to 'hello' hello 100% |***********************************************************************************************************************************| 13 0:00:00 ETA 'hello' saved Cleanup To conclude, you can do some cleanup. Stop the running Pod. Shell $ podman pod stop hello-pod The Pod has the status Exited now. Shell $ podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS dab9029ad0c5 hello-pod Exited 55 minutes ago aac3420b3672 3 All containers in the Pod are also exited. Shell $ podman ps --all CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES aac3420b3672 k8s.gcr.io/pause:3.5 56 minutes ago Exited (0) About a minute ago 0.0.0.0:8080-8081->8080-8081/tcp dab9029ad0c5-infra 321a62fbb4fc localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT env 48 minutes ago Exited (143) About a minute ago 0.0.0.0:8080-8081->8080-8081/tcp mypodmanplanet-1 7b95fb521544 localhost/mydeveloperplanet/mypodmanplanet:0.0.1-SNAPSHOT 46 minutes ago Exited (143) About a minute ago 0.0.0.0:8080-8081->8080-8081/tcp mypodmanplanet-2 Remove the Pod. Shell $ podman pod rm hello-pod The Pod and the containers are removed. Shell $ podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS $ podman ps --all CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Conclusion The bold statement that Podman is a drop-in replacement for Docker is not true. Podman differs from Docker on certain topics like building container images, starting containers, networking, volume mounts, inter-container communication, etc. However, Podman does support many Docker commands. The statement should be Podman is an alternative to Docker. This is certainly true. It is important for you to know and understand the differences before switching to Podman. After this, it is definitely a good alternative.
In the rapidly evolving landscape of technology, the role of cloud computing is more significant than ever. This revolutionary paradigm continues to reshape the way businesses operate, fostering an environment ripe for unprecedented innovation. In this in-depth exploration, we take a journey into the future of cloud computing, discussing emerging trends such as autonomous and distributed cloud, generative AI tools, multi-cloud strategies, and Kubernetes – the cloud’s operating system. We will also delve into the increasing integration of data, AI, and machine learning, which promises to unlock new levels of efficiency, insight, and functionality in the cloud. Let’s explore these fascinating developments and their implications for developer productivity and the broader industry. Autonomous Cloud: The Self-Managing Future One of the most anticipated trends is the autonomous cloud, where the management of cloud services is largely automated. Leveraging advanced AI and machine learning algorithms, autonomous clouds are capable of self-healing, self-configuring, and self-optimizing. They can predict and preemptively address potential issues, reducing the workload on IT teams and improving the reliability of services. As cloud infrastructure complexity grows, the value of such autonomous features will be increasingly critical in maintaining optimal performance and availability. Distributed Cloud: Cloud Computing at the Edge Distributed cloud is another compelling trend that can revolutionize how we consume cloud services. By extending cloud services closer to the source of data or users, distributed cloud reduces latency, enhances security, and provides better compliance with data sovereignty laws. Moreover, it opens a new horizon for applications that require real-time processing and decision-making, such as IoT devices, autonomous vehicles, and next-gen telecommunication technologies like 5G and beyond. Generative AI Tools: Reshaping Development The integration of generative AI tools into cloud platforms is set to redefine the software development lifecycle. These tools can generate code, perform testing, and even create UI designs, dramatically enhancing developer productivity. With AI-assisted development, software production will be faster and more efficient, enabling developers to focus on higher-level design and strategic tasks rather than getting bogged down in minutiae. Expect this technology to democratize software development and inspire a new generation of cloud-native applications. Developer Productivity: Elevation Through Cloud As cloud services become more sophisticated, they are streamlining processes and reducing the technical burdens on developers. Cloud platforms now offer an array of prebuilt services and tools, from databases to AI models, which developers can leverage without needing to build from scratch. Furthermore, the advent of serverless computing and Function-as-a-Service (FaaS) paradigms is freeing developers from infrastructure management, allowing them to focus solely on their application’s logic and functionality. Kubernetes: The OS of the Cloud Kubernetes, often regarded as the ‘OS of the cloud,’ is a crucial player in cloud evolution. As a leading platform for managing containerized workloads, Kubernetes offers a highly flexible and scalable solution for deploying, scaling, and managing applications in the cloud. Its open-source and platform-agnostic nature makes it a key enabler of hybrid and multi-cloud strategies. 
Kubernetes adoption is set to skyrocket further as more organizations realize the benefits of containerization and microservices architectures. Multi-Cloud Strategies: The Best of All Worlds Enterprises are increasingly adopting multi-cloud strategies, leveraging the strengths of different cloud service providers to meet specific needs. This approach ensures they have the flexibility to use the right tool for the right job. It also provides redundancy, protecting businesses from vendor lock-in and potential outages. However, it brings with it a new level of complexity in terms of management and integration. To address this, we can expect to see further development in multi-cloud management platforms and services. Cloud Security: New Approaches for New Threats With the rise in cyber threats and the increasing amount of sensitive data moving to the cloud, the focus on security is becoming more crucial than ever. As a result, we are likely to witness advancements in cloud security practices, with enhanced encryption, AI-driven threat detection, zero-trust architectures, and blockchain-based solutions. The idea is to create an environment where data is safe, no matter where it resides or how it is accessed. DataOps: The New DevOps Data Fabric DataOps, borrowing principles from the agile methodology and DevOps, is an emerging trend that aims to improve the speed, quality, and reliability of data analytics. It involves automated, process-oriented methodologies, tools, and techniques to improve the orchestration, management, and deployment of data transformations. Additionally, as Generative AI models become more complex and numerous, DataOps provides the necessary support for continuous model updates and refinements, automated deployment, and seamless integration of public and corporate data into production environments as per the data sovereignty requirements. Ultimately, DataOps is a critical component in harnessing the full power of Generative AI by managing the data it thrives on. Quantum Computing: The Frontier of the Next Technological Evolution Quantum computing, with its tremendous computational potential, is set to revolutionize technology by integrating with advanced systems like Generative AI. The emergence of quantum-specific hardware, tools, and programming languages will allow developers to harness quantum power effectively. However, accessibility is key to driving this evolution. Simplified APIs and cloud-based quantum computing services are crucial, enabling developers to create quantum algorithms and utilize quantum services without owning complex hardware. This blend of quantum computing, Generative AI, advanced tools, and improved accessibility is poised to ignite the next leap in technological innovation, solving complex problems and to accelerating progress across fields. Sustainability and Green Cloud Computing Lastly, as society becomes more environmentally conscious, the focus on energy-efficient, or ‘green,’ cloud computing will intensify. Green cloud computing is becoming paramount, aiming to minimize the environmental impact of data centers. This involves optimizing energy usage, leveraging renewable energy, and using AI to manage resources efficiently. Emerging tools allow companies to measure their sustainability metrics and aid in developing energy-efficient applications. Simultaneously, advancements in energy-efficient hardware and commitments from cloud providers towards carbon neutrality are bolstering this sustainable shift. 
As such, the evolution of cloud computing is not just about technological advancement but also about preserving our planet, reinforcing the industry’s drive toward a greener and more responsible future. Summary In this exploration of the future of cloud computing, we’ve delved into key trends such as autonomous and distributed cloud, generative AI tools, and Kubernetes. We’ve also examined the burgeoning role of DataOps in managing AI data and models, the transformative potential of quantum computing, and the increasing convergence of data, AI, and machine learning. As these trends shape the continuously evolving landscape of technology, they promise to drive innovation, enhance efficiency, and unlock unprecedented functionalities. While some trends may gain momentum, others may be reimagined under new visions, born from previous learning, continuously propelling the digital transformation journey forward.
Cypress is an open-source end-to-end testing framework for web applications. It allows developers to write tests in JavaScript to simulate user interactions and verify the behavior of their web applications. Cypress provides a rich set of APIs and a built-in test runner that makes writing, running, and debugging tests easy. On the other hand, Microsoft Teams is a collaborative communication and teamwork platform developed by Microsoft. It is part of the Microsoft 365 suite of productivity tools and is designed to bring together individuals, teams, and organizations to collaborate and communicate effectively. Microsoft Teams integration with Cypress improved visibility into the testing process. Test run notifications and updates can be automatically sent to relevant team members. The Microsoft Teams integration with Cypress allows you to see your test results directly in your Microsoft Teams channels. This blog covers the following: How we can integrate Microsoft Teams with Cypress. How Test run notifications and updates can be automatically sent to Team Channel. Pass / Fail Report on Team Channel. Pre-Condition The user already logged into Microsoft Teams and Cypress Cloud organization. Why Cypress Microsoft Teams Integration Cypress Microsoft Teams integration can be beneficial for teams that use both Cypress for automated testing and Microsoft Teams for collaboration and communication. By integrating Cypress with Microsoft Teams, you can: Real-time test notifications: Receive instant notifications in Microsoft Teams when your Cypress tests run, allowing team members to stay updated on test execution status. Collaborative debugging: Share test failure details, including error messages, screenshots, and logs, directly in a Teams channel. This enables team members to collaborate on debugging and resolving test failures more efficiently. Centralized communication: Keep all relevant test-related communication and discussions within the Microsoft Teams platform, providing a centralized location for team collaboration and reducing the need to switch between different tools. Improved visibility: By integrating Cypress with Microsoft Teams, you can make test execution and results more visible to the entire Team. This increased visibility helps ensure that everyone stays informed about the application’s test coverage and quality. Seamless integration with existing workflows: Microsoft Teams is a widely used collaboration platform, and integrating Cypress with Teams allows you to leverage your existing workflow and tools. You can combine Cypress test notifications with other capabilities of Teams, such as creating tasks, scheduling meetings, or sharing documentation, to streamline your development and testing processes. Centralized test reporting: By integrating Cypress test reports with Microsoft Teams, you can centralize the storage and access of test reports. Team members can access test reports directly within Teams channels, making it easier to review test results, track progress, and share information with stakeholders. Let’s Do Cypress Microsoft Teams Integration Step 1 Login into Cypress Cloud and Open Organization settings. Step 2 Click on the Integrations link from the left side menu. Step 3 Click the ‘Enable’ Option in the Microsoft Teams section. As we enable the option, you’ll navigate to a panel that controls webhooks to communicate between Microsoft Teams and Cypress Cloud. 
Step 4 Now open Microsoft Teams. Open the channel where you want to add the webhook and select the three dots ••• in the upper-right corner. Select Connectors from the options, as shown below. Step 5 When we select 'Connectors,' the screen below opens. Step 6 Click on "Configure" next to Incoming Webhook. Provide the webhook name and click the "Create" button. Once we click the "Create" button, the webhook URL is created, as shown below. Finally, click the "Done" button. Step 7 Now, from the Integrations screen in Cypress Cloud, click on "Add Teams webhook" and provide the details in the screen shown above. Step 8 Enter the webhook name and the Teams webhook URL, keep the "All runs" option selected under the Notification drop-down, and finally click on "Add webhook." The installation is finished once your webhooks have been added and configured. All run results for projects in your organization will be posted by Cypress Cloud to the designated Microsoft Teams channel. Step 9 Set up the Cypress project for demo purposes using the code below. Install Cypress version 12.12.0 using the command: npm install --save-dev cypress Finally, after installation, create a test case. JavaScript /// <reference types="cypress" /> describe("QAAutomationLabs.com", { testIsolation: false }, () => { it("Open URL", () => { cy.visit("https://qaautomationlabs.com/"); }); it("Click on Read More ", () => { cy.get(".staticslider-button").click(); }); it("Verify Particular Blog ", () => { cy.contains( "Running End-to-End Cypress Test cases In Google Cloud Build Pipeline" ); }); it("Click on Blogs", () => { cy.contains("Blog").scrollIntoView().click({ force: true }); }); it("Search the data", () => { cy.get('[id="wp-block-search__input-2"]').scrollIntoView(); cy.get('[id="wp-block-search__input-2"]') .click({ force: true }) .type("cypress"); cy.get('[id="search-icon"]').click({ force: true }); cy.contains("Search Results for: cypress"); }); }); Step 10 In cypress.config.js, add the 'projectId.' JavaScript const { defineConfig } = require("cypress"); module.exports = defineConfig({ projectId: "projectId", e2e: { setupNodeEvents(on, config) { // implement node event listeners here }, }, }); Step 11 Run the test case using the command below. In Cypress, the --record and --key flags are used in combination to enable test recording and to specify the record key for a project: npx cypress run --record --key xxxx-xxx-xxxxxx-xxxx When running Cypress tests from the command line, the --record flag is used to enable test recording. Test recording allows the test results to be sent to the Cypress Dashboard, where you can view and analyze the test runs. The --key flag is used to specify the record key, which is a unique identifier associated with your project in the Cypress Dashboard. The record key is used to authenticate and link the test results to your project in the dashboard. Step 12 After running the test case, let's see the result in the Teams channel. Run the command and see the result in Teams: npx cypress run --record --key xxx-xxx-41f0-b763-xx MS Teams Notification Result (Pass/Fail) Pass Scenario The screenshot below shows a Pass notification sent to MS Teams; all five tests executed successfully. Fail Scenario In the screenshot below, we can see a Pass/Fail notification sent to MS Teams. Four tests executed successfully, and one test failed.
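As a side note, instead of pasting the record key on the command line (where it can end up in shell history or CI logs), Cypress also reads it from the CYPRESS_RECORD_KEY environment variable. A minimal sketch, assuming the record key created above is exported in your shell or CI configuration: Shell $ export CYPRESS_RECORD_KEY=<your-record-key> $ npx cypress run --record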
Wrap Up Cypress Microsoft Teams integration allows developers and testers to collaborate efficiently within the Microsoft Teams environment. They can easily share test results, discuss issues, and coordinate their efforts in real time. The Teams integration also provides improved visibility into the testing process: test run notifications and updates can be sent automatically to the relevant team members, keeping everyone informed about the progress and status of the tests.
The article will cover the following topics: Why is Envoy proxy required? Introducing Envoy proxy Envoy proxy architecture with Istio Envoy proxy features Use cases of Envoy proxy Benefits of Envoy proxy Demo video - Deploying Envoy in K8s and configuring as a load balancer Why Is Envoy Proxy Required? Challenges are plenty for organizations moving their applications from monolithic to microservices architecture. Managing and monitoring the sheer number of distributed services across Kubernetes and the public cloud often exhausts app developers, cloud teams, and SREs. Below are some of the major network-level operational hassles of microservices, which shows why Envoy proxy is required. Lack of Secure Network Connection Kubernetes is not inherently secure because services are allowed to talk to each other freely. It poses a great threat to the infrastructure since an attacker who gains access to a pod can move laterally across the network and compromise other services. This can be a huge problem for security teams, as it is harder to ensure the safety and integrity of sensitive data. Also, the traditional perimeter-based firewall approach and intrusion detection systems will not help in such cases. Complying With Security Policies Is a Huge Challenge There is no developer on earth who would enjoy writing security logic to ensure authentication and authorization, instead of brainstorming business problems. However, organizations who want to adhere to policies such as HIPAA or GDPR, ask their developers to write security logic such as mTLS encryption in their applications. Such cases in enterprises will lead to two consequences: frustrated developers, and security policies being implemented locally and in silos. Lack of Visibility Due to Complex Network Topology Typically, microservices are distributed across multiple Kubernetes clusters and cloud providers. Communication between these services within and across cluster boundaries will contribute to a complex network topology in no time. As a result, it becomes hard for Ops teams and SREs to have visibility over the network, which impedes their ability to identify and resolve network issues in a timely manner. This will lead to frequent application downtime and compromised SLA. Complicated Service Discovery Services are often created and destroyed in a dynamic microservices environment. Static configurations provided by old-generation proxies are ineffective in keeping track of services in such an environment. This makes it difficult for application engineers to configure communication logic between services because they have to manually update the configuration file whenever a new service is deployed or deleted. It leads to application developers spending more of their time configuring the networking logic rather than coding the business logic. Inefficient Load Balancing and Traffic Routing It is crucial for platform architects and cloud engineers to ensure effective traffic routing and load balancing between services. However, it is a time-consuming and error-prone process for them to manually configure routing rules and load balancing policies for each service, especially when they have a fleet of them. Also, traditional load balancers with simple algorithms would result in inefficient resource utilization and suboptimal load balancing in the case of microservices. All these lead to increased latency, and service unavailability due to improper traffic routing. 
With the rise in the adoption of microservices architecture, there was a need for a fast, intelligent proxy that can handle the complex service-to-service connection across the cloud. Introducing Envoy Proxy Envoy is an open-source edge and service proxy, originally developed by Lyft to facilitate their migration from a monolith to cloud-native microservices architecture. It also serves as a communication bus for microservices (refer to Figure 1 below) across the cloud, enabling them to communicate with each other in a rapid, secure, and efficient manner. Envoy proxy abstracts network and security from the application layer to an infrastructure layer. This helps application developers simplify developing cloud-native applications by saving hours spent on configuring network and security logic. Envoy proxy provides advanced load balancing and traffic routing capabilities that are critical to run large, complex distributed applications. Also, the modular architecture of Envoy helps cloud and platform engineers to customize and extend its capabilities. Figure 1: Envoy proxy intercepting traffic between services Envoy Proxy Architecture With Istio Envoy proxies are deployed as sidecar containers alongside application containers. The sidecar proxy then intercepts and takes care of the service-to-service connection (refer to Figure 2 below) and provides a variety of features. This network of proxies is called a data plane, and it is configured and monitored from a control plane provided by Istio. These two components together form the Istio service mesh architecture, which provides a powerful and flexible infrastructure layer for managing and securing microservices. Figure 2: Istio sidecar architecture with Envoy proxy data plane Envoy Proxy Features Envoy proxy offers the following features at a high level. (Visit Envoy docs for more information on the features listed below.) Out-of-process architecture: Envoy proxy runs independently as a separate process apart from the application process. It can be deployed as a sidecar proxy and also as a gateway without requiring any changes to the application. Envoy is also compatible with any application language like Java or C++, which provides greater flexibility for application developers. L3/L4 and L7 filter architecture: Envoy supports filters and allows customizing traffic at the network layer (L3/L4) and at the application layer (L7). This allows for more control over the network traffic and offers granular traffic management capabilities such as TLS client certificate authentication, buffering, rate limiting, and routing/forwarding. HTTP/2 and HTTP/3 support: Envoy supports HTTP/1.1, HTTP/2, and HTTP/3 (currently in alpha) protocols. This enables seamless communication between clients and target servers using different versions of HTTP. HTTP L7 routing: Envoy's HTTP L7 routing subsystem can route and redirect requests based on various criteria, such as path, authority, and content type. This feature is useful for building front/edge proxies and service-to-service meshes. gRPC support: Envoy supports gRPC, a Google RPC framework that uses HTTP/2 or above as its underlying transport. Envoy can act as a routing and load-balancing substrate for gRPC requests and responses. Service discovery and dynamic configuration: Envoy supports service discovery and dynamic configuration through a layered set of APIs that provide dynamic updates about backend hosts, clusters, routing, listening sockets, and cryptographic material. 
This allows for centralized management and simpler deployment, with options for DNS resolution or static config files. Health checking: For building an Envoy mesh, service discovery is treated as an eventually consistent process. Envoy has a health-checking subsystem that can perform active and passive health checks to determine healthy load-balancing targets. Advanced load balancing: Envoy's self-contained proxy architecture allows it to implement advanced load-balancing techniques, such as automatic retries, circuit breaking, request shadowing, and outlier detection, in one place, accessible to any application. Front/edge proxy support: Using the same software at the edge provides benefits such as observability, management, and identical service discovery and load-balancing algorithms. Envoy's feature set makes it well-suited as an edge proxy for most modern web application use cases, including TLS termination, support for multiple HTTP versions, and HTTP L7 routing. Best-in-class observability: Envoy provides robust statistics support for all subsystems and supports distributed tracing via third-party providers, making it easier for SREs and Ops teams to monitor and debug problems occurring at both the network and application levels. Given its powerful set of features, Envoy proxy has become a popular choice for organizations to manage and secure multicloud and multicluster apps. In practice, it has two main use cases. Use Cases of Envoy Proxy Envoy proxy can be used as both a sidecar service proxy and a gateway. Envoy Sidecar Proxy As we have seen in the Isito architecture, Envoy proxy constitutes the data plane and manages the traffic flow between services deployed in the mesh. The sidecar proxy provides features such as service discovery, load balancing, traffic routing, etc., and offers visibility and security to the network of microservices. Envoy Gateway as API Envoy proxy can be deployed as an API Gateway and as an ingress (please refer to the Envoy Gateway project). Envoy Gateway is deployed at the edge of the cluster to manage external traffic flowing into the cluster and between multicloud applications (north-south traffic). Envoy Gateway helped application developers who were toiling to configure Envoy proxy (Istio-native) as API and ingress controller, instead of purchasing a third-party solution like NGINX. With its implementation, they have a central location to configure and manage ingress and egress traffic and apply security policies such as authentication and access control. Below is a diagram of Envoy Gateway architecture and its components. Envoy Gateway architecture (Source) Benefits of Envoy Proxy Envoy’s ability to abstract network and security layers offers several benefits for IT teams such as developers, SREs, cloud engineers, and platform teams. Following are a few of them. Effective Network Abstraction The out-of-process architecture of Envoy helps it to abstract the network layer from the application to its own infrastructure layer. This allows for faster deployment for application developers, while also providing a central plane to manage communication between services. Fine-Grained Traffic Management With its support for the network (L3/L4) and application (L7) layers, Envoy provides flexible and granular traffic routing, such as traffic splitting, retry policies, and load balancing. 
Ensure Zero Trust Security at L4/L7 Layers Envoy proxy helps to implement authentication among services inside a cluster with stronger identity verification mechanisms like mTLS and JWT. You can achieve authorization at the L7 layer with Envoy proxy easily and ensure zero trust. (You can implement AuthN/Z policies with Istio service mesh — the control plane for Envoy.) Control East-West and North-South Traffic for Multicloud Apps Since enterprises deploy their applications into multiple clouds, it is important to understand and control the traffic or communication in and out of the data centers. Since Envoy proxy can be used as a sidecar and also an API gateway, it can help manage east-west traffic and also north-south traffic, respectively. Monitor Traffic and Ensure Optimum Platform Performance Envoy aims to make the network understandable by emitting statistics, which are divided into three categories: downstream statistics for incoming requests, upstream statistics for outgoing requests, and server statistics for describing the Envoy server instance. Envoy also provides logs and metrics that provide insights into traffic flow between services, which is also helpful for SREs and Ops teams to quickly detect and resolve any performance issues. Video: Get Started With Envoy Proxy Deploying Envoy in k8s and Configuring as Load Balancer The below video discusses different deployment types and their use cases, and it shows a demo of Envoy deployment into Kubernetes and how to set it as a load balancer (edge proxy).
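If you would like to experiment on a single machine before following the Kubernetes demo, one option is to run the official Envoy container image with your own configuration mounted over the default one. A minimal sketch, where the image tag and the local envoy.yaml file are assumptions to adapt to your setup: Shell $ docker run --rm -it -p 10000:10000 -v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy:v1.27.0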
In a previous blog post, I demonstrated how to deploy Jakarta EE applications as serverless services with AWS Fargate. The Jakarta EE application I used as an example in the previously mentioned post was a basic one. Now, it's time to go further and see what it takes to do the same thing with more complex Jakarta EE applications involving a wide variety of components like RESTful, CDI, Enterprise Beans, and Server Pages. Jakarta EE Full Platform is the superset of what has been commonly called "enterprise Java." While not officially coined as a Jakarta EE profile, such as the Core Profile or the Web Profile, the Full Platform includes the whole bunch of the Jakarta EE specifications implementations, from the most essential ones, like Servlet, to the most uncharted ones, like RMI or SAAJ. In this article, we'll be using Jakarta EE 10 and its implementation by WildFly 27.0.1.Final release. The Project The project used in order to illustrate this blog ticket may be found here. It consists of a Maven-based Jakarta EE 10 sample application for items management involving the following components: a RESTful 3.1 endpoint that aims at CRUD-ing items; a CDI 4.0 component on the domain layer; an Enterprise Bean Lite 4.0 component implementing the facade design pattern; a Faces 4.0 front-end providing an easy way to test the application. I tried to carefully choose these components in order to have as much as a reasonable variety of Jakarta EE implementations, such that to justify the deployment of a Full Platform server like WildFly. And while some analysts still think that WildFly, like all the existent Jakarta EE implementations, is a heavy platform, I'll demonstrate how fast and easy it can be deployed on AWS in a Fargate serverless context. But, before going into infrastructure and deployment considerations, let's first have a detailed analysis of each project's layer. The Faces 4.0 Front End Our front-end layer consists of an XHTML-based view using Facelets. As the reader might know, Jakarta EE Faces and JSF (Java Server Faces) were historically used to support two view technologies: JSP (Java Server Pages) and Facelets. The new 4.0 release is deprecating the JSP support, and it adds a new programmatic one based on the jakarta.faces.view.facelets.Facelet interface. While this new API might be practical for developers who prefer having a pure Java view definition, it always has been considered that a clean software architecture has to separate the business logic from the visual presentation, including using a different declarative notation, like Facelets. That's for this reason that, in this example, I didn't give way to the temptation of using this new Faces API, but I kept using the old good Facelets notation and one of its most useful features: the templating. The listing below shows the file template.xhtml and the way that it defines our Facelets template: HTML <html xmlns:ui="jakarta.faces.facelets" xmlns:h="jakarta.faces.html"> <h:head> <h:outputStylesheet name="style.css"/> </h:head> <h:body> <div class="page"> <div class="header"> <ui:include src="header.xhtml"/> </div> <div class="content"> <ui:insert name="content">Default Content</ui:insert> </div> <div class="footer"> <ui:include src="footer.xhtml"/> </div> </div> </h:body> </html> As we can see, our HTML page is divided into three regions, a header, content, and a footer. Each region is defined in a separate .xhtml file. 
Don't hesitate to open the files named header.xhtmland, respectively, footer.xhtml located in the src/main/webapp/resources/templates directory and to see how each one of these regions are defined. The region named content is defined in the index.xhtml file, located in the src/main/webapp directory. HTML <html xmlns:ui="jakarta.faces.facelets" xmlns:f="jakarta.faces.core" xmlns:h="jakarta.faces.html" xmlns:mc="jakarta.faces.composite/mycomponents"> <body> <ui:composition template="/templates/template.xhtml"> <ui:define name="content"> <mc:item key = "#{item.key}" value ="#{item.value}" actionListener="#{itemManager.save()}"/> <h:form id="items"> <h:dataTable value="#{itemManager.itemList}" var="it" styleClass="table" headerClass="table-header" rowClasses="table-odd-row,table-even-row"> <h:column> <f:facet name="header">Key</f:facet> <h:outputText value="#{it.key}"/> </h:column> <h:column> <f:facet name="header">Value</f:facet> <h:outputText value="#{it.value}"/> </h:column> <h:column> <f:facet name="header">Delete</f:facet> <h:commandButton actionListener="#{itemManager.delete(it)}" styleClass="buttons" value="Delete"/> </h:column> </h:dataTable> </h:form> </ui:define> </ui:composition> </body> </html> The ui:composition and ui:define tags in the listing above state that here we're defining the element named content in the template.xhtml file. The namespace jakarta.faces.composite is a new feature of the JSF 2.3, included in Faces 4.0 as well, and aims at defining composite components. The tag mc:item in the listing below references such a composite component, while its definition is shown in the file item.xhtml located in the src/main/webapp/resources/mycomponents directory. HTML <html xmlns:cc="jakarta.faces.composite" xmlns:h="jakarta.faces.html"> <cc:interface> <cc:attribute name="key"/> <cc:attribute name="value"/> <cc:attribute name="actionListener" method-signature="void action(javax.faces.event.Event)" targets="form:saveButton"/> </cc:interface> <cc:implementation> <h:form id="form"> <h:panelGrid columns="2"> <h:outputText value="Key:" for="inputTextKey"/> <h:inputText value="#{cc.attrs.key}" id="inputTextKey"/> <h:outputText value="Value:" for="inputTextValue"/> <h:inputText value="#{cc.attrs.value}" id="inputTextValue"/> </h:panelGrid> <h:commandButton id="saveButton" value="Save"/> <h:messages errorStyle="color: red" infoStyle="color: green" globalOnly="true" /> </h:form> </cc:implementation> </html> Our composite component here above defines a form consisting of a grid with two columns and a command button. This simple interface allows us to CRUD items. At this point of the presentation, I have to apologize to the readers for the poor styling of the visual part of the project. As a matter of fact, I have to admit that I completely lack any graphical or visual designing skills. However, I tried to style them as much as I could the presentation and the maximum I could come to is in the file style.css located in the src/main/webapp/resource directory. Please don't hesitate to adapt and customize this file such that the visual presentation of the front end becomes more attractive. 
The CDI 4.0 Component Layer The listing below shows the class ItemManager, whose instance drives the items' CRUD operations: Java @Model public class ItemManager { @Inject private ItemFacade itemFacade; @Produces @Named private Item item = new Item(); @Produces @Named public List<Item> getItemList() { return itemFacade.getItemList(); } public void save() { itemFacade.addToList(item); FacesMessage facesMsg = new FacesMessage(FacesMessage.SEVERITY_INFO, "Added item " + item.getKey(), null); FacesContext.getCurrentInstance().addMessage(null, facesMsg); item = new Item(); } public void delete(Item item) { itemFacade.removeFromList(item); } public boolean isFull() { return (itemFacade.getItemList().size() > 0); } } Notice how annotations like @Produces and @Named are used in the listing above in order to produce new bean instances and, respectively, to export them to EL (Expression Language), so that they can be used in the view component in expressions like #{item.key} and #{item.value} or #{itemList}. The Enterprise Bean 4.0 Component Layer The ItemManager CDI bean injects an instance of ItemFacade, the service that effectively performs the items' CRUD operations. This service is implemented as a singleton session Enterprise Bean, as shown in the listing below: Java @Singleton @Named public class ItemFacade implements Serializable { private List<Item> itemList; public ItemFacade() { } @PostConstruct public void postConstruct() { itemList = new ArrayList<Item>(); } public List<Item> getItemList() { return itemList; } public void setItemList(List<Item> itemList) { this.itemList = itemList; } public int addToList(Item item) { itemList.add(item); return itemList.size(); } public int addToList(String key, String value) { itemList.add(new Item(key, value)); return itemList.size(); } public int removeFromList(Item item) { itemList.remove(item); return itemList.size(); } public int removeFromList(int idx) { itemList.remove(idx); return itemList.size(); } public void removeAll() { itemList.clear(); } } We could imagine the facade component performing some persistence operations, like storing the items in a database or sending JMS messages to a message broker. In our simple example, we only store them in a collection. The RESTful 3.1 Endpoint This endpoint exposes the operations required to CRUD items, as shown in the table below:
Resource | HTTP Request | Action | Java Method
/items/list | GET | Get the full list of the currently registered items | public List<Item> getItems()
/items | POST | Create a new item | public Response createItem()
/items/{id} | GET | Get the item identified by its key, passed as a path parameter | public Item getItemByPathParam()
/items | GET | Get the item identified by its key, passed as a query parameter | public Item getItemByQueryId()
/items/{id} | DELETE | Remove the item identified by its key | public Response removeItem()
Nothing new here. This is just a classical RESTful endpoint of the kind we see all the time; a sketch of what such an endpoint might look like is shown below.
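The post doesn't reproduce the endpoint's source code, but based on the table above it could look roughly like the sketch below. This is a hypothetical illustration, not the project's actual class: the method bodies, the key-lookup helper, and the error handling are assumptions, and the project also needs a Jakarta REST Application subclass (or @ApplicationPath configuration) that is omitted here. Java
import java.util.List;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;

@ApplicationScoped
@Path("items")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ItemResource {

    @Inject
    private ItemFacade itemFacade;

    // GET /items/list: return all currently registered items
    @GET
    @Path("list")
    public List<Item> getItems() {
        return itemFacade.getItemList();
    }

    // POST /items: register a new item
    @POST
    public Response createItem(Item item) {
        itemFacade.addToList(item);
        return Response.status(Response.Status.CREATED).entity(item).build();
    }

    // GET /items/{id}: look up an item by its key, passed as a path parameter
    @GET
    @Path("{id}")
    public Item getItemByPathParam(@PathParam("id") String key) {
        return findByKey(key);
    }

    // GET /items?id=...: look up an item by its key, passed as a query parameter
    @GET
    public Item getItemByQueryId(@QueryParam("id") String key) {
        return findByKey(key);
    }

    // DELETE /items/{id}: remove the item identified by its key
    @DELETE
    @Path("{id}")
    public Response removeItem(@PathParam("id") String key) {
        itemFacade.removeFromList(findByKey(key));
        return Response.noContent().build();
    }

    // Assumed helper: the facade only exposes list operations, so we filter by key here
    private Item findByKey(String key) {
        return itemFacade.getItemList().stream()
                .filter(item -> item.getKey().equals(key))
                .findFirst()
                .orElseThrow(NotFoundException::new);
    }
}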
Testing Locally Before deploying to the cloud, we need to make sure that our application works as expected, since it isn't worth deploying as long as it doesn't work. Our pom.xml file defines several profiles, as follows: arq-managed: This profile defines an Arquillian integration test in order to deploy our WAR on a managed WildFly server; arq-remote: This profile defines an Arquillian integration test in order to deploy our WAR on a remote (already running) WildFly server; arq-docker: This profile defines an Arquillian integration test in order to deploy our WAR on a WildFly server running in a Docker container; docker: This profile is useful to test the Faces front end in an interactive way. It doesn't use Arquillian; it only deploys the WAR on a WildFly application server running in a Docker container, giving you the possibility to use your preferred browser and to perform manual tests; aws: This profile allows us to execute integration tests without Arquillian once the application has been deployed on the cloud. For example, in order to execute Arquillian integration tests in a managed WildFly container, execute the following command: Shell $ mvn -Parq-managed clean package verify A WildFly server will be downloaded and executed locally, and Arquillian will deploy the WAR in it before executing the integration tests, which should succeed. In the same way, if you want to perform tests in a WildFly server running in a Docker container, you may execute the following command: Shell $ mvn -Parq-docker clean package verify If you want to perform manual tests without Arquillian, you need to do the following: Shell $ mvn clean package docker:build docker:run This will start a Docker container running a WildFly server, and you will be able to connect to its URL in order to check whether your Faces front end works. Pushing to the Cloud In a previous blog ticket, I've already investigated the use of the AWS Fargate serverless service as an efficient way to deploy Jakarta EE applications on the cloud. It is this same service that we'll be using here to deploy the WildFly application server, together with our Jakarta EE 10 application. Assuming that copilot is already installed on your box, the only thing to do is to run the following command: Shell $ copilot init --app duke-app --name duke --type "Load Balanced Web Service" --dockerfile ./DockerfileWithWAR --port 8080 --deploy The execution of this command might take some time, depending on your bandwidth. Once finished, you'll see a message similar to the following: - You can access your service at http://duke-Publi-V5NKFZU77FWB-1356331722.eu-west-3.elb.amazonaws.com over the internet. Now you can connect to this URL and check whether the Faces front end works as it did when tested locally. If you prefer to execute integration tests, you can use the aws profile, as follows: Shell $ mvn -Paws clean package verify Don't forget to clean up your environment as soon as you've finished playing: Shell $ copilot app delete Again, the execution of this command might take some time. Enjoy!
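Beyond the Faces front end, you can also smoke-test the REST endpoint of the deployed service directly from the command line. A minimal sketch with placeholders, assuming the items resource is reachable under the application's context root and REST application path (both depend on how the WAR is configured): Shell $ curl http://<your-load-balancer-host>/<context-root>/items/list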
Amazon Web Services (AWS) Lambda is an incredibly useful cloud computing platform, allowing businesses to run their code without managing infrastructure. However, the invocation type of Lambda functions can be confusing for newcomers. By understanding the key differences between asynchronous and synchronous invocations, you'll be able to set up your Lambda functions for maximum efficiency. Here's a deep dive into the mysteries of AWS Lambda invocation. Overview of the AWS Lambda Function Invocation Process The AWS Lambda Function Invocation Process begins when an event triggers the function. This event can come from a variety of sources, including HTTP requests, changes to data in an Amazon S3 bucket, or updates to a DynamoDB table. Once the event occurs, AWS Lambda automatically provisions and runs the necessary compute resources to process the request. There are two types of invocation methods in AWS Lambda: Synchronous and Asynchronous. The main difference between these two methods is the way in which they handle the response from the function. In synchronous invocation, the caller waits for the response from the function before continuing. This means that the function must execute completely before the caller can proceed. On the other hand, asynchronous invocation immediately returns a response to the caller, allowing it to continue with other tasks while the function executes in the background. Synchronous invocation is ideal for situations where a response is required immediately, such as when you're building a user-facing application or an API. Asynchronous invocation, on the other hand, is useful when you don't need an immediate response or when you have long-running tasks that require more time to execute. Regardless of which method you choose, AWS Lambda allows you to easily scale your functions to meet demand and pay only for the compute time that you consume. By choosing the right invocation method, you can optimize the performance of your Lambda functions and reduce costs. Asynchronous vs. Synchronous Invocation Let's take a closer look at some scenarios where you might choose to use synchronous or asynchronous invocation. Synchronous invocation is well-suited for tasks that require immediate feedback. For instance, if you're building an e-commerce website and a customer places an order, you'll want to know immediately whether the order was successful or not. In this case, you would use a synchronous invocation to ensure that the customer receives a response as soon as possible. Here is an example AWS CLI command to invoke a function named "aFunction" with input data from a file named input.json: Plain Text aws lambda invoke --function-name aFunction --payload file://input.json output.txt Asynchronous invocation, on the other hand, is great for tasks that involve long-running processes or batch jobs. For example, suppose that you're processing a large amount of data and need to perform some complex computations on it. In this case, you could use an asynchronous invocation to kick off the computation and return a response to the user while the computation continues to run in the background. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function. To invoke a function asynchronously, set the invocation type parameter to Event. 
Plain Text aws lambda invoke --function-name aFunction --invocation-type Event --payload file://input.json output.txt Another example of when to use asynchronous invocation is when you're integrating multiple services together. Suppose that you have a workflow that involves several different services, each of which must be invoked in turn. By using asynchronous invocation, you can decouple the services from one another and allow them to execute independently, improving overall performance and scalability. In short, choosing the right invocation method is critical for optimizing the performance and scalability of your AWS Lambda functions. By understanding the differences between synchronous and asynchronous invocation and using them appropriately, you can ensure that your applications are responsive, efficient, and cost-effective. Error Handling and Automatic Retries Another important feature of AWS Lambda is its built-in error handling and automatic retries. When an asynchronously invoked function encounters an error, Lambda automatically retries the execution using the same event data. This can be useful for transient errors such as network timeouts or temporary resource constraints. You can control the number of retries and the time between retries using the function's configuration settings. If the retries are unsuccessful, Lambda can either discard the event or send it to a dead-letter queue (DLQ) for further analysis. AWS recommends the use of Lambda Destinations. Lambda Destinations is a feature that allows you to define a destination for the asynchronous invocations of your AWS Lambda function. With Lambda Destinations, you can route failed invocations to a DLQ or another function for further processing. You can also send successful invocations to a queue or a stream for downstream processing. This feature provides more visibility and control over the behavior of your function, especially when handling asynchronous invocations at scale. By defining destinations for your function's invocations, you can monitor and troubleshoot issues more effectively and create more resilient serverless architectures. A DLQ is a feature in AWS that allows you to capture and store events or messages that could not be processed by a function. When a function fails to process an event, it can send the event to a DLQ instead of discarding it. This can be useful for debugging and troubleshooting purposes because you can analyze the failed events to determine the cause of the failure. In addition, you can set up alerts to notify you when events are sent to the DLQ, which can help you proactively identify and resolve issues. The DLQ is configured for asynchronous invocations and can be used in conjunction with Lambda Destinations for more advanced error-handling scenarios. Overall, the dead-letter queue is a powerful tool that can help you build robust and reliable serverless applications on AWS. How To Choose Between Asynchronous and Synchronous Invocation? Choosing the right invocation method between synchronous and asynchronous depends on the specific needs of your application. If your function is short-lived and you need immediate feedback from it, then synchronous invocation is the way to go. It provides a simple and straightforward way of executing functions and handling errors. However, if your function takes longer to execute or if you're integrating multiple services together, asynchronous invocation may be the better choice.
It allows for greater scalability and performance since the caller is not blocked while long-running tasks are executed in the background. Additionally, you can use a combination of both synchronous and asynchronous invocation methods depending on your application's requirements. By understanding the strengths and weaknesses of each method, you can choose the right invocation method that optimizes the performance and scalability of your AWS Lambda functions.
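For reference, the retry behavior and failure destinations for asynchronous invocations discussed above can also be configured from the CLI. A hedged sketch, where the function name follows the earlier examples and the queue ARN is just a placeholder: Plain Text aws lambda put-function-event-invoke-config --function-name aFunction --maximum-retry-attempts 1 --maximum-event-age-in-seconds 3600 --destination-config '{"OnFailure":{"Destination":"arn:aws:sqs:<region>:<account-id>:my-dlq"}}'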
Boris Zaikin
Senior Software Cloud Architect,
Nordcloud GmbH
Ranga Karanam
Best Selling Instructor on Udemy with 1 MILLION Students,
in28Minutes.com
Samir Behara
Senior Cloud Infrastructure Architect,
AWS
Pratik Prakash
Master Software Engineer (SDE-IV),
Capital One