Throughout this series of video tutorials, we will explore sending and receiving messages between a Java producer/consumer and an Apache Kafka server.

Java Consumer Code to Consume Messages From the Apache Kafka Server
This video tutorial explains how to write a consumer that consumes messages from a Kafka server installed on an Amazon EC2 instance.

Kafka Producer: Send Messages to a Single Topic Partition
Explore how a Kafka producer sends all messages to a single selected partition: the producer can select the partition of a topic where it wants to publish the message, and all messages will then be published to that single partition (for example, P0). To send messages to the selected partition, we have to pass the partition number and the key.

Java Producer Code to Send Messages to the Apache Kafka Server Installed on an Amazon EC2 Instance
In this tutorial video, you will discover how to write producer code in Java that sends messages to a Kafka server deployed on an Amazon EC2 instance.

Java Producer and Consumer Code to Send/Receive Messages To/From the Apache Kafka Server (EC2 Instance)
Learn how to use Java producer and consumer code to send and receive messages to and from the Apache Kafka server (EC2 instance).

Java Program to Send a Custom Object Into a Kafka Topic and Consume a Custom Object From a Kafka Topic
Here is a Java program that sends a custom object to a Kafka topic and also consumes a custom object from a Kafka topic.

Java Program to Send/Consume a Custom Object Into/From a Kafka Server Running on an Amazon EC2 Instance
Next, you can learn how to use a Java program to send a JSON object or a custom object to a Kafka topic, and how to consume a JSON object or a custom object back from a Kafka topic.
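The idea behind sending all keyed messages to one partition can be sketched without a running Kafka cluster. The class and method names below are hypothetical, and the hash is only a stand-in for Kafka's real murmur2-based default partitioner:

```java
// Hypothetical sketch of keyed partition selection. Kafka's default
// partitioner hashes the key with murmur2; String.hashCode() stands in
// here purely to illustrate the idea.
public class PartitionSketch {

    // Map a message key to one of the topic's partitions (P0..Pn-1).
    public static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 3; // e.g., a topic with partitions P0, P1, P2
        // The same key always lands on the same partition, which is what
        // keeps all messages for one key ordered within that partition.
        System.out.println("user-42 -> P" + partitionFor("user-42", partitions));
        System.out.println("user-43 -> P" + partitionFor("user-43", partitions));
    }
}
```

When you pass an explicit partition number to the producer (as the videos above do), this hash-based selection is skipped and the message goes straight to the chosen partition.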
I contributed to PCRE and wrote two smaller regular expression engines, but I still regularly learn something new about this topic. This time, it's about a regex that never matches.

When using character classes, you specify the allowed characters in brackets, such as [a-z] or [aeiouy]. But what happens if the character class is empty? Popular regex engines treat the empty brackets [] differently.

In JavaScript, they never match. This is valid JavaScript code, and it always prints false regardless of the value of str:

JavaScript
const str = 'a';
console.log(/[]/.test(str));

However, in Java, PHP (PCRE), Go, and Python, the same regex throws an exception:

Java
@Test
void testRegex1() {
    PatternSyntaxException e = assertThrows(PatternSyntaxException.class,
            () -> Pattern.compile("[]"));
    assertEquals("Unclosed character class", e.getDescription());
}

PHP
<?php
ini_set('display_errors', 1);
error_reporting(E_ALL);
// Emits a warning: preg_match(): Compilation failed: missing terminating ] for character class
echo preg_match('/[]/', ']') ? 'Match ' : 'No match';

Python
import re
re.compile('[]')  # throws "unterminated character set"

In these languages, you can put the closing bracket right after the opening bracket, so it does not need to be escaped:

Java
@Test
void testRegex2() {
    Pattern p = Pattern.compile("[]]");
    Matcher m = p.matcher("]");
    assertTrue(m.matches());
}

PHP
<?php
echo preg_match('/[]]/', ']', $m) ? 'Match ' : 'No match'; // Outputs 'Match'
print_r($m);

Python
import re
print(re.match('[]]', ']'))  # outputs the Match object

Go
package main

import (
    "fmt"
    "regexp"
)

func main() {
    matched, err := regexp.MatchString(`[]]`, "]")
    fmt.Println(matched, err)
}

This won't work in JavaScript because the first ] is interpreted as the end of the character class there, so the same regular expression in JavaScript means an empty character class that never matches, followed by a closing bracket.
As a result, the regular expression never finds the closing bracket:

JavaScript
console.log(/[]]/.test(']')); // outputs false

If you negate the empty character class with ^ in JavaScript, it will match any character, including newlines:

console.log(/[^]/.test(''));   // outputs false
console.log(/[^]/.test('a'));  // outputs true
console.log(/[^]/.test('\n')); // outputs true

Again, this is an invalid regex in other languages. PCRE can emulate the JavaScript behavior if you pass the PCRE2_ALLOW_EMPTY_CLASS option to pcre2_compile. PHP never passes this flag.

If you want to match an opening or a closing bracket, this somewhat cryptic regular expression will help you in Java, PHP, Python, or Go: [][]. The first opening bracket starts the character class, which includes the literal closing bracket and the literal opening bracket; the last closing bracket ends the class. In JavaScript, you need to escape the closing bracket like this: [\][]

console.log(/[\][]/.test('[')); // outputs true
console.log(/[\][]/.test(']')); // outputs true

There are many other ways to construct a regular expression that always fails, in case you need one, so it makes sense to reserve this syntax for a literal closing bracket. In Aba Search and Replace, I chose to support the syntax used in Java/PHP/Python/Go.

Replacing text in several files used to be a tedious and error-prone task. Aba Search and Replace solves the problem, allowing you to correct errors on your web pages, replace banners and copyright notices, change method names, and perform other text-processing tasks.
Last week, I decided to explore the capabilities of OpenAI's image generation. However, I noticed that one has to pay to use the web interface, while the API is free, albeit rate-limited. Dall.E offers Node.js and Python samples, but I wanted to keep learning Rust. So far, I've created a REST API. In this post, I want to describe how you can create a Web app with server-side rendering.

The Context

Tokio is a runtime for asynchronous programming in Rust; Axum is a web framework that leverages it. I had already used Axum for the previous REST API, so I decided to continue.

A server-side-rendered Web app is similar to a REST API. The only difference is that the former returns HTML pages and the latter JSON payloads. From an architectural point of view, there's no difference; from a development one, however, it plays a huge role. There's no visual requirement in JSON, so ordering is not an issue: you get a struct, you serialize it, and you are done. You can even do it manually; it's no big deal, though a bit boring. On the other hand, HTML requires a precise ordering of the tags: if you create it manually, maintenance is going to be a nightmare. We invented templating to generate order-sensitive code with code. While templating is probably age-old, PHP was the language that popularized it. One writes regular HTML and, when necessary, adds the snippets that need to be dynamically interpreted. In the JVM world, I used JSPs and Apache Velocity, the latter to generate RTF documents.

Templating in Axum

As I mentioned above, I want to continue using Axum. Axum doesn't offer any templating solution out of the box, but it allows integrating any solution through its API. Here is a small sample of templating libraries that I found for Rust:

- handlebars-rust, based on Handlebars
- liquid, based on Liquid
- Tera, based on Jinja, as are the next two
- askama
- MiniJinja
- etc.

As a developer, however, I'm lazy by essence, and I wanted something integrated with Axum out of the box.
A quick Google search led me to axum-template, which seems pretty new but very dynamic. The library is an abstraction over handlebars, askama, and minijinja. You can use the API and change the implementation whenever you want.

axum-template in Short

Setting up axum-template is relatively straightforward. First, we add the dependency to Cargo:

Shell
cargo add axum-template

Then, we create an engine depending on the underlying implementation and configure Axum to use it. Here, I'm using Jinja:

Rust
type AppEngine = Engine<Environment<'static>>;        //1

#[derive(Clone, FromRef)]
struct AppState {                                     //2
    engine: AppEngine,
}

#[tokio::main]
async fn main() {
    let mut jinja = Environment::new();               //3
    jinja.set_source(Source::from_path("templates")); //4
    let app = Router::new()
        .route("/", get(home))
        .with_state(AppState {                        //5
            engine: Engine::from(jinja),
        });
}

1. Create a type alias.
2. Create a dedicated structure to hold the engine state.
3. Create a Jinja-specific environment.
4. Configure the folder to read templates from. The path is relative to the location where you start the binary; it shouldn't be part of the src folder. I spent a nontrivial amount of time realizing this.
5. Configure Axum to use the engine.

Here are the base items:

- Engine is a facade over the templating library.
- Templates are stored in a hashtable-like structure. With the MiniJinja implementation and the configuration above, the Key is simply the filename, e.g., home.html.
- The final S parameter has no requirements. The library will read its attributes and use them to fill the template.

I won't go into the details of the template itself, as the documentation is quite good.

The impl Return

It has nothing to do with templating, but this mini-project allowed me to ponder the impl return type. In my previous REST project, I noticed that Axum handler functions return impl, but I didn't think much about it.
It's indeed pretty simple:

If your function returns a type that implements MyTrait, you can write its return type as -> impl MyTrait. This can help simplify your type signatures quite a lot! - Rust By Example

However, it has interesting consequences. If you return a single type, it works like a charm. However, if you return more than one, you either need a common trait across all returned types or to be explicit about it. Here's the original sample:

Rust
async fn call(engine: AppEngine, Form(state): Form<InitialPageState>) -> impl IntoResponse {
    RenderHtml(Key("home.html".to_owned()), engine, state)
}

If the page state needs to differentiate between success and error, we must create two dedicated structures:

Rust
async fn call(engine: AppEngine, Form(state): Form<InitialPageState>) -> Response { //1
    let page_state = PageState::from(state);
    if page_state.either.is_left() {
        RenderHtml(Key("home.html".to_owned()), engine, page_state.either.left().unwrap()).into_response()  //2
    } else {
        RenderHtml(Key("home.html".to_owned()), engine, page_state.either.right().unwrap()).into_response() //2
    }
}

1. We cannot use impl IntoResponse; we need the explicit Response type.
2. Explicitly transform the return value into a Response.

Using the Application

You can build from the source or run the Docker image, available on DockerHub. The only requirement is to provide an OpenAI authentication token via an environment variable:

Shell
docker run -it --rm -p 3000:3000 -e OPENAI_TOKEN=... nfrankel/rust-dalle:0.1.0

Enjoy!

Conclusion

This small project allowed me to discover another side of Rust: HTML templating with Axum. It's not the usual use case for Rust, but it's part of it anyway. On the Dall.E side, I was not particularly impressed with the capabilities. Perhaps I didn't manage to describe the desired results in the right way; I'll need to up my prompt engineering skills. In any case, I'm happy that I developed the interface, if only for fun.
The complete source code for this post can be found on GitHub.

To go further:
- axum-template
- Image generation API
I recently presented this talk at Conf42 Golang 2023, and I thought it might be a good idea to turn it into a blog post for folks who don't want to spend 40+ minutes watching the talk (it's OK, I understand) or just staring at slides trying to imagine what I was saying. So, here you go! By the way, you are still welcome to watch the talk or download the slides! There are a lot of great talks that you can get from this playlist.

This talk was geared toward folks who are looking to get started with Redis and Go. Or perhaps you are already experienced with both these topics – in that case, it might be a good refresher! To that extent, I had a very simple agenda:

- I started off by setting the context about Redis and Go
- Provided an overview of the Go and Redis ecosystem, including the client options you've got
- Followed by some hands-on stuff
- Wrapped up with some tips/tricks and resources

I Love Redis and Go!

Since its release in 2009, it did not take Redis too long to win the hearts and minds of the developer community! As per DB-Engines trend statistics, Redis has been topping the charts since 2013. And in Stack Overflow's annual survey, it's been voted the most loved database five years in a row.

Go has become the language of the cloud. It powers many cloud-native projects (apparently, 75% of CNCF projects are written in Go), including Docker, Kubernetes, Prometheus, Terraform, etc. In fact, there are many databases written in Go – like InfluxDB, etcd, Vitess, TiDB, etc. Go also caters to a wide variety of general-purpose use cases:

- Web apps and APIs
- Data processing pipelines
- Infrastructure as code
- SRE/DevOps solutions
- Command-line apps (this is a really popular one!)
- And more

No wonder Go has become such a popular programming language! Now you might be thinking, "Hey, Go is down at the bottom." But if you look carefully, it is the only statically typed language after Rust (of course, there are C# and Kotlin down there as well), and this is from 2022.
If you look at the data from 2021 back to 2018, you will notice that Go has maintained its top-5 spot.

Go and Redis have a few things in common, but to me, simplicity is the one that really stands out.

Simplicity

Redis is a key-value store, but the values can be any of the data structures that you see. These are all data structures that we as developers use every day – lists, sets, maps, etc. Redis just feels like an extension of these core programming language constructs.

With Go, simplicity comes in various forms:

- Excellent tooling
- A comprehensive standard library
- Easy-to-use concurrency primitives

And sometimes it's in the form of not bloating the language with unnecessary features. To cite an example, it took a while for generics to be added to the language. Now, I am not trying to trick you into thinking that Go is simple, or for that matter, that any programming language is simple. But with Go, the goal is to give you simple components to help build complex things and hide complexity behind a simple facade. There are folks who have explained this in great detail (and much better than I can!). I would really encourage you to check out this talk by Rob Pike (and the slides), the co-creator of Go (it's from 2015, but still very much applicable to the essence and spirit of Go).

Redis 101

A quick intro to Redis (some of the key points):

- Data structure server: At its core, Redis is a key-value store, where the key can be a string or even binary. The important thing to note is that as far as the value is concerned, you can choose from a variety of data structures such as strings, hashes, lists, sets, etc.
- Redis is not just a cache: It's a really solid messaging platform as well.
- HA: You can configure Redis to be highly available by setting up primary-replica replication, or take it a step further with Redis Cluster.
- Persistence: Redis is primarily in-memory, but you can configure it to persist to disk as well.
There are solutions like Amazon MemoryDB that can actually take it a notch further (thanks to its distributed transactional log). Since Redis is open-source and wildly popular, you can get offerings from pretty much every cloud provider – big or small – or even run it on Kubernetes: on cloud, on-prem, or in hybrid mode. If you want to put Redis in production, you can rest assured that there is no dearth of options to run and operate it.

Redis Data Types

What you see here is a list of the core data structures:

- A string seems really simple but is quite powerful. Strings can be used for something as simple as storing a key-value pair, all the way to implementing advanced patterns like distributed locking and rate limiting.
- A hash is similar to a map in Java or a dictionary in Python. It is used to store object-like structures such as user profiles, customer info, etc.
- A set behaves like its mathematical version: it helps you maintain a unique set of items, along with the ability to list them, count them, execute unions, intersections, and so on.
- Think of a sorted set as the big brother of a set. Sorted sets make it possible to implement things like leaderboards, which are very useful in areas like gaming. For example, you can store player scores in a sorted set, and when you need the top 10 players, you simply invoke the specific sorted set commands and get that data. The beauty is that the sorting happens on the database side, so there is no client-side logic you need to apply.
- Lists are a really versatile data structure as well. You can use them to store many things, but using them as a worker queue is very common. Popular open-source solutions such as sidekiq and celery already support Redis as a backend for job queuing.
- Redis Streams (added in Redis 5) are used for streaming use cases.
- There is also ephemeral messaging with pub/sub – a publish-broadcast mechanism where you can send/receive messages to/from channels.
There are also geospatial data structures, and a really cool one called HyperLogLog, which is an alternative to a traditional set: it can store millions of items while optimizing storage, and you can count the number of unique items with really high accuracy.

Go Clients for Redis

These are the most popular Go clients for Redis:

- go-redis is by far the most popular client. It has what you'd expect: features, decent documentation, and an active community. Moving it under the official Redis GitHub org is just icing on the cake!
- redigo is a fairly stable and tenured client that supports all the standard Redis data types and features such as transactions, pipelining, etc. It is also used to implement other Go client libraries such as redisearch-go and the Redis TimeSeries Go client. That being said, its API is a bit too flexible. While some may prefer that, I feel it's not a good fit when I am using a type-safe language like Go (that's just my personal opinion). But the biggest drawback to me is that it does not support Redis Cluster!
- rueidis is a relatively new (at the time of writing) but quickly evolving client library. It supports the RESP3 protocol and client-side caching, and supports a variety of Redis modules. As far as the API is concerned, this client adopts an interesting approach: it provides a Do function as well (like the redigo client), but it builds the command via a builder pattern, which retains strong type checking (unlike redigo). To be honest, I haven't used this client a lot, and it looks like it's packed with features, so I can't complain much as of now! For those looking for deeper performance numbers, this benchmark comparison with the go-redis library might be interesting.

Client Ecosystem

Now let's take a look from an ecosystem perspective. To clarify, the point here is not chest-thumping based on GitHub stars – it's just to give you a sense of things.
For folks not using Go and Redis, the popularity of the Go client might come as a surprise. Java workloads form a huge chunk of Redis workloads, and Jedis is the bread-and-butter client when it comes to Java apps with Redis (and it's pretty old). But I was surprised to see redisson topping the charts, followed by go-redis (yay!), (two) Node.js clients, Python, and finally back to Java. Another thing to note is that I was looking for clients with more than 10,000 stars. At the time of writing, the phpredis client was close to 9,600 stars.

Now, time for some practical stuff.

Demos

During the demo, I covered a walk-through of the Go Redis client:

- Basics such as connecting to Redis
- Using common data types like string with TTL
- Using hash and struct support in go-redis
- How to use set as well as the pipelining technique
- HyperLogLog and how it differs from a set:

Go
func hllandset() {
	pipe := client.Pipeline()
	ips := []string{}
	for i := 1; i <= 1_000_000; i++ {
		ips = append(ips, fake.IP(fake.WithIPv4()))
	}
	pipe.SAdd(context.Background(), "set_of_ips", ips)
	pipe.PFAdd(context.Background(), "hll_of_ips", ips)
	pipe.Exec(ctx)

	//redis-cli: MEMORY USAGE set_of_ips
	fmt.Println("no. of unique views (SCARD) -", client.SCard(ctx, "set_of_ips").Val())

	//redis-cli: MEMORY USAGE hll_of_ips
	fmt.Println("no. of unique views (PFCOUNT) -", client.PFCount(ctx, "hll_of_ips").Val())
}

And a chat application using pub/sub (below is a trimmed-down version of the code):

Go
package main

import (
	//omitted
)

var client *redis.Client
var Users map[string]*websocket.Conn
var sub *redis.PubSub
var upgrader = websocket.Upgrader{}

const chatChannel = "chats"

func init() {
	Users = map[string]*websocket.Conn{}
}

func main() {
	//...connect to redis (omitted)
	broadcast()
	http.HandleFunc("/chat/", chat)
	server := http.Server{Addr: ":8080", Handler: nil}
	//...start server (omitted)
	exit := make(chan os.Signal, 1)
	signal.Notify(exit, syscall.SIGTERM, syscall.SIGINT)
	<-exit
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	//...clean up all connected sessions, unsubscribe, and shut down the server (omitted)
}

func chat(w http.ResponseWriter, r *http.Request) {
	user := strings.TrimPrefix(r.URL.Path, "/chat/")
	upgrader.CheckOrigin = func(r *http.Request) bool { return true }

	// 1. create websocket connection
	c, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Print("upgrade:", err)
		return
	}

	// 2. associate the user (name) with the actual connection
	Users[user] = c
	fmt.Println(user, "in chat")

	for {
		_, message, err := c.ReadMessage()
		if err != nil {
			//error handling and disconnect (omitted)
		}
		// 3. when a message comes in via the connection, publish it to the redis channel
		client.Publish(context.Background(), chatChannel, user+":"+string(message))
	}
}

func broadcast() {
	go func() {
		sub = client.Subscribe(context.Background(), chatChannel)
		messages := sub.Channel()
		for message := range messages {
			from := strings.Split(message.Payload, ":")[0]
			// 4. when a message is received on the redis channel, broadcast it to all connected sessions (users)
			for user, peer := range Users {
				if from != user {
					peer.WriteMessage(websocket.TextMessage, []byte(message.Payload))
				}
			}
		}
	}()
}

Tips and Tricks

In this section, I covered some of the points from "Using Redis on Cloud?
Here Are Ten Things You Should Know." These include:

- Connecting to Redis – common mistakes
- Scalability options – vertical and horizontal
- Using read replicas
- Influencing how your keys are distributed across a Redis cluster
- Executing bulk operations across a Redis Cluster
- Sharded pub/sub

Resources

Finally, I wrapped up with some resources, including the Discord channel for the Go Redis community. Happy building!
Classloaders are an essential part of the Java Virtual Machine (JVM), but many developers consider them to be mysterious. This article aims to demystify the subject by providing a basic understanding of how class loading works in the JVM.

What Are Classloaders?

In the Java Virtual Machine (JVM), classes are loaded dynamically through a process called class loading. Class loading is the process of loading a class from its binary representation (usually a .class file) into memory so that it can be executed by the JVM. This is where we need classloaders: class loaders are used to load .class files into memory.

How Classes Are Loaded in the JVM

Classes are loaded in three steps:

Creation and loading. The first thing that happens is loading a class file using a class loader. There are two kinds of class loaders: the bootstrap class loader supplied by the JVM and user-defined class loaders. (More details about class loaders are in the next chapter.) Then, an instance of the java.lang.Class class is created, which makes the class available to the JVM for further execution. A detailed step-by-step algorithm can be found in the Java Virtual Machine Specification.

Linking, before the class is ready for execution. The JVM needs to perform a number of preparatory operations, which include verification and preparation of the class for execution. The linking steps are the following:

Bytecode verification. Verification ensures that the binary representation of a class or interface is structurally correct and not corrupted. Otherwise, the class file will not be linked, and a VerifyError will be thrown. Verification can be turned off with the -noverify option. Turning off verification can speed up the startup of the JVM, but disabling bytecode verification undermines Java's safety and security guarantees – so it is best not to disable it.

Preparation. Allocate RAM for static fields and initialize them with default values.
Resolution of symbolic references. Since all references to fields, methods, and other classes are symbolic, the JVM needs to translate these references into internal representations in order to execute the class.

Initialization. After a class is successfully loaded and linked, it can be initialized. At this stage, the static class initializers and static variable initializers are called, which ensures that the static initialization block is executed only once and static variables are initialized correctly.

Also, it is worth remembering that Java implements delayed (or lazy) loading of classes. This means that classes referenced by the fields of the loaded class will not be loaded until the application explicitly refers to them. In other words, symbolic reference resolution is optional and does not happen by default.

Classloader Features

Class loaders have three important features that are worth remembering:

- Delegation model: When requested to find a class or resource, a class loader will delegate the search to its parent class loader before attempting to find the class or resource itself.
- Visibility: Classes loaded by a parent class loader are visible to its child class loaders, but classes loaded by a child class loader are not visible to its parent class loaders.
- Uniqueness: In Java, a class is uniquely identified by ClassLoader + Class, as the same class may be loaded by two different class loaders: Class A loaded by ClassLoader A != Class A loaded by ClassLoader B. This is helpful for defining different protection and access policies for different classloaders.

Classloaders Relationships

We should remember that classes in Java are loaded on demand; that is, a class is loaded only when it is requested. As you know, the entry point of every program written in Java is the public static void main(String[] args) method. The main method is where the very first class is loaded.
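The delegation chain described above is easy to observe from a plain main program. A minimal sketch, assuming Java 9+ (where ClassLoader.getName() exists and OpenJDK reports the built-in loaders as "app" and "platform"):

```java
// Minimal sketch: walking the class loader parent chain (assumes Java 9+).
public class LoaderChain {
    public static void main(String[] args) {
        // Core classes are loaded by the bootstrap loader, represented as null.
        System.out.println(String.class.getClassLoader());

        // Our own class is loaded by the system (application) class loader...
        ClassLoader app = LoaderChain.class.getClassLoader();
        System.out.println(app.getName());

        // ...whose parent is the platform class loader...
        ClassLoader platform = app.getParent();
        System.out.println(platform.getName());

        // ...whose parent is again the bootstrap loader (null).
        System.out.println(platform.getParent());
    }
}
```

Running this shows the full chain: null for String, then the application loader, its platform parent, and null at the root.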
All subsequently loaded classes are loaded by classes that are already loaded and running. When a class is requested by the running program, the request reaches the System Class Loader, which delegates it to its parent, the Platform Class Loader, which in turn delegates to the Bootstrap Class Loader. The Bootstrap Class Loader tries to load the class first; if it is not found there, the Platform Class Loader tries; and if it is still not found, the System Class Loader searches the application classpath. If the requested class is found by a parent class loader, it is loaded by that class loader; if not, the System Class Loader loads it. If the class has not been loaded before, the class loader loads it into memory and creates a new instance of the Class object that represents the loaded class. It is important to note that class loading is hierarchical in nature, with each class loader having a parent class loader. This parent-child relationship ensures that each class loader is responsible for loading only its own classes and delegates the rest to its parent.

Different Types of Classloaders

The class loading mechanism in the JVM doesn't use only one class loader. Every Java program has at least three class loaders:

Bootstrap (Primordial) Class Loader: This is the root class loader, responsible for loading core Java classes such as java.lang.Object and other classes in the Java standard library (also known as the Java Runtime Environment, or JRE). It is implemented in native code and is part of the JVM itself. Although each class loader has its own ClassLoader object, there is no such object corresponding to the Bootstrap Class Loader. For example, if you run String.class.getClassLoader(), you get null.

Extension Class Loader: This class loader is responsible for loading classes from the extension directories (such as the jre/lib/ext directory in the JRE installation) and is a child of the Bootstrap Class Loader. You can also specify the locations of the extension directories via the java.ext.dirs system property.
System (Application) Class Loader: This is the class loader that loads application-specific classes, usually from the classpath specified when running the Java application. The classpath can include directories, JAR files, and other resources, and can be set using the CLASSPATH environment variable or the -classpath/-cp command-line option. The System/Application Class Loader is implemented in Java and is a child of the Extension Class Loader.

How Classloaders Changed Over Time

The previous section described the class loader hierarchy as it existed until Java 9 revised it. The class loader hierarchy since Java 9 looks like this:

Bootstrap (Primordial) Class Loader: As before, this is the root class loader, responsible for loading core Java classes such as java.lang.Object and other classes in the Java standard library. It is implemented in native code and is part of the JVM itself. There is no ClassLoader object corresponding to the Bootstrap Class Loader; it is typically represented as null and doesn't have a parent. For example, if you run String.class.getClassLoader(), you get null.

Platform Class Loader (former Extension Class Loader): All classes in the Java SE Platform are guaranteed to be visible through the Platform Class Loader. However, just because a class is visible through the Platform Class Loader does not mean it is actually defined by it: some classes in the Java SE Platform are defined by the Platform Class Loader, while others are defined by the Bootstrap Class Loader. Applications should not depend on which class loader defines which platform class.

System (Application) Class Loader: This is the class loader that loads application-specific classes, usually from the classpath specified when running the Java application.
The classpath can include directories, JAR files, and other resources, and can be set using the CLASSPATH environment variable or the -classpath/-cp command-line option. The System/Application Class Loader is implemented in Java and is a child of the Platform Class Loader.

There are even more changes related to class loaders that were introduced in Java 9:

- The Application Class Loader is no longer an instance of URLClassLoader but rather an internal class. There is now a ClassLoaders class that contains the implementations of the three built-in class loaders: BootClassLoader, PlatformClassLoader, and AppClassLoader. However, the bootstrap class loader should be used via the BootLoader class, not via the ClassLoaders class.
- The Extension Class Loader has been renamed to the Platform Class Loader. The substantial difference between the Java 8 Extension Class Loader and the Java 9 Platform Class Loader is that the latter is no longer an instance of URLClassLoader; for the most part, though, the Platform Class Loader is the equivalent of what used to be known as the Extension Class Loader. One motivation for renaming it is that the extension mechanism has been removed, as discussed in the next point.
- Removed extension mechanism: In releases before Java 9, the extension mechanism allowed the runtime environment to find and load extension classes without explicitly mentioning them on the classpath. In JDK 9, this mechanism was removed. To use extension classes, ensure that their JAR files are included in the classpath.
- Removed rt.jar and tools.jar: rt.jar contained all of the compiled class files for the base Java Runtime Environment, while tools.jar contained the tools needed by a JDK but not a JRE (javac, javadoc, javap).

Big thanks to Erik Pragt for reviewing and guiding me.
With technological advancements, images have come to play a big role in online communication. Image galleries are among the most common ways to showcase images and provide a better user experience. In this tutorial, we will walk through how to build a modern image gallery using HTML, CSS, and JavaScript. This tutorial will guide you through creating a grid layout, adding hover effects to images, and filtering images by category. With these skills, you can create a visually appealing and functional image gallery. Whether you are a beginner or an experienced web developer, this tutorial will provide you with the knowledge and skills to create a modern image gallery using the latest web technologies. 1. Set up the HTML Structure Create a basic HTML structure for the image gallery with a container div and a grid div. Inside the grid div, create a div for each image with an img tag. HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>imageGallery</title> <link rel="stylesheet" href="style.css" /> </head> <body> <div class="gallery"> <div class="grid"> <div class="item"> <img src="beautiful-flower.jpg" /> </div> <div class="item"> <img src="beautiful-flowers.jpg" /> </div> <div class="item"> <img src="hill.jpg" /> </div> <!-- Add more image divs as needed --> </div> </div> </body> </html> 2. Style the Gallery With CSS Add CSS styles to create a responsive grid layout for the gallery, and add hover effects to the images. 
CSS .gallery { max-width: 1200px; margin: 0 auto; } .grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); grid-gap: 20px; } .item { position: relative; overflow: hidden; height: 0; padding-bottom: 75%; cursor: pointer; } .item img { position: absolute; top: 0; left: 0; width: 100%; height: 100%; object-fit: cover; transition: transform 0.3s ease; } .item:hover img { transform: scale(1.2); } Output Let's continue by adding more functionality. 3. Add Filter Buttons With JavaScript Create filter buttons that allow users to filter the gallery by category. Use JavaScript to add event listeners to the buttons and filter the images based on their class. First, let us edit the HTML file to hold more images. HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>imageGallery</title> <link rel="stylesheet" href="style.css" /> </head> <body> <h1 align="center">Simple Image Gallery</h1> <div class="gallery"> <div class="filters"> <button class="filter-btn" data-filter="all">All</button> <button class="filter-btn" data-filter="nature">Nature</button> <button class="filter-btn" data-filter="food">Food</button> </div> <div class="grid"> <div class="item"> <img src="beautiful-flower.jpg" /> </div> <div class="item"> <img src="beautiful-flowers.jpg" /> </div> <div class="item"> <img src="hill.jpg" /> </div> <!-- Add more image divs as needed --> </div> </div> <div class="gallery" style="margin-top: 40px;"> <div class="grid"> <div class="item nature"> <img src="beautiful.jpg" /> </div> <div class="item food"> <img src="food2.jpg" /> </div> <div class="item nature"> <img src="hill.jpg" /> </div> <!-- Add more image divs as needed --> </div> </div> <script src="js.js"></script> </body> </html> Link the JS file as shown and add the code below to your JavaScript file. 
In this example, the JavaScript file is saved as js.js. The script adds event listeners to the buttons and filters the images based on their class. JavaScript const filterBtns = document.querySelectorAll('.filter-btn'); const gridItems = document.querySelectorAll('.item'); filterBtns.forEach(btn => { btn.addEventListener('click', () => { const filter = btn.dataset.filter; filterItems(filter); setActiveFilterBtn(btn); }); }); function filterItems(filter) { gridItems.forEach(item => { if (filter === 'all' || item.classList.contains(filter)) { item.style.display = 'block'; } else { item.style.display = 'none'; } }); } function setActiveFilterBtn(activeBtn) { filterBtns.forEach(btn => { btn.classList.remove('active'); }); activeBtn.classList.add('active'); } 4. Customize the Gallery as Needed You can customize the gallery by adding more filters, changing the CSS styles, or adding more functionality with JavaScript. Conclusion By following the steps outlined in this tutorial, you can create a responsive and user-friendly gallery that is both visually appealing and functional. With the use of modern web technologies such as CSS Grid and JavaScript, you can create an image gallery that is both easy to maintain and scalable for future enhancements. We hope this tutorial has been helpful in guiding you through the process of building a modern image gallery and that you are now equipped with the skills to create your own unique gallery. Keep practicing and exploring the latest web technologies, and you'll be amazed at what you can create! Happy coding!
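One detail worth adding: the JavaScript above toggles an active class on the filter buttons, but the tutorial's stylesheet never defines a rule for it, so the selected button is not visually highlighted. A possible addition to style.css (the class names follow the tutorial; the specific colors and spacing are arbitrary choices):

```css
.filters {
  text-align: center;
  margin-bottom: 20px;
}

.filter-btn {
  padding: 8px 16px;
  border: 1px solid #ccc;
  background: #fff;
  cursor: pointer;
}

/* Highlight the currently selected filter button */
.filter-btn.active {
  background: #333;
  color: #fff;
  border-color: #333;
}
```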
Are you exhausted from drowning in an overwhelming flood of print statements while debugging your Python code? Longing for a superior solution to effortlessly identify and rectify common Python errors? Your search ends here with Pdb, the Python debugger that streamlines issue resolution with unparalleled ease. In the ever-growing realm of Python, developers seek dependable tools for swift and efficient code debugging. Enter Pdb, a powerful solution enabling step-by-step code traversal, variable inspection, and strategic breakpoints. With its streamlined interface, Pdb is an indispensable companion for Python developers striving to debug like seasoned pros. Join us as we explore the depths of Pdb's capabilities and unleash your debugging prowess! Setting Up Pdb Pdb is part of the Python standard library, so there is no separate installation step: it is available in every standard Python distribution. Running Pdb From the Command Line You can run an entire script under the debugger from the command line with python -m pdb your_script.py, or pause execution at a specific point by adding this single line of code to your Python file: Python import pdb; pdb.set_trace() This will start the Pdb debugger and pause your code at that point. (Since Python 3.7, the built-in breakpoint() call is a shorthand for the same thing.) You can then use various Pdb commands to inspect and modify your code as needed. Running Pdb in Your Code For a more streamlined debugging experience, consider using the Pdb module directly in your code. This allows you to debug without the need for frequent code modifications. Simply import the Pdb module and invoke the set_trace() method at the desired debugging starting point. For instance: Python import pdb def my_function(): pdb.set_trace() # rest of your code here This will start the Pdb debugger at the point where you call the 'set_trace()' method, allowing you to step through your code and identify any errors. Common Python Errors Let's kick things off with syntax errors. 
These pesky mistakes arise from typos or invalid syntax in your code. Picture this: you write print("Hello, World!" in Python and forget the closing parenthesis, and boom, a syntax error is thrown. (A misspelled name like "pritn", by contrast, surfaces as a name error at runtime.) But fear not! Using Pdb, you can identify and resolve syntax errors by stepping through your code until you locate the problematic line. Once found, simply make the necessary edits and resume running your code. Problem solved! Moving forward, let's address name errors—those pesky issues that arise when you attempt to utilize an undefined variable or function. Imagine writing "print(x)" without a prior definition of variable x in Python, resulting in a name error. To resolve such errors using Pdb, execute your code with Pdb and examine the existing variables and functions at the error's occurrence. Once you locate the undefined variable or function, define it and proceed with running your code smoothly. Third, we have type errors. These errors occur when you try to use a variable or function in a way that is not compatible with its data type. For example, if you tried to add an integer and a string together with "1" + 2, Python would throw a type error. To use Pdb to find and fix type errors, simply run your code with Pdb and inspect the data types of the variables and functions that are being used incorrectly. Once you find the incompatible data type, you can correct it and continue running your code. Index errors can occur when attempting to access an index that doesn't exist within a list or string. For instance, if you try to access the third item in a two-item list, Python will raise an index error. To identify and resolve these index errors using Pdb, execute your code with Pdb and examine the accessed indices. Once the out-of-bounds index is identified, make the necessary correction to proceed with running your code. Enter the world of key errors, the elusive bugs that arise when attempting to access nonexistent keys in a dictionary. 
Picture this: you're digging into a dictionary without defining the key first, and boom! Python throws a key error at you. Fear not, for Pdb is here to save the day. By running your code with Pdb and examining the keys in question, you'll uncover the undefined key culprit. Define it, and voila! Your code can resume its smooth operation. Advanced Pdb Techniques Pdb has several advanced techniques that can make debugging even easier and more effective. Stepping Through Code Pdb's standout feature is its line-by-line code stepping capability, enabling you to precisely track execution and swiftly identify errors. Use "s" to step into functions, "n" to execute the next line, and "c" to continue until breakpoints or code end. Setting Breakpoints One powerful technique in Pdb is the use of breakpoints. These breakpoints pause code execution at specific points, allowing for program state inspection. To set a breakpoint in Pdb, simply use the "b" command followed by the line number or function name. Conditional breakpoints are also possible by appending a comma and a condition after the location, as in "b 42, x > 10". Inspecting Variables Unraveling the mysteries of your code is made simpler by leveraging the power of Pdb. With the "p" command, you can effortlessly examine variable values at different program junctures. Moreover, the "pp" command comes in handy for beautifully displaying intricate objects such as dictionaries and lists. Changing Variables In the midst of debugging your code, there might be instances where you wish to alter the value of a variable to observe its impact on the program's behavior. Again, Pdb comes to the rescue: prefix an assignment with "!" to execute it as a Python statement in the current frame. For instance, executing "!y = 29" would modify the value of "y" to 29. Continuing Execution Once you've pinpointed and resolved a coding error, it's crucial to proceed with execution to uncover any subsequent issues. 
Pdb simplifies this process through its "c" command, seamlessly resuming execution until the next breakpoint or the code's conclusion. Best Practices Here are some of the best practices you should keep in mind: Don't Overuse Pdb Debugging with Pdb can be tempting, but overusing it is a common mistake. Although it's a powerful tool, relying on it for every small issue can result in cluttered code that's difficult to read and understand. Instead, save Pdb for when it's truly necessary and consider using simpler debugging techniques, such as print statements for simpler issues. Document Your Debugging Process In the realm of code debugging, it's common to lose track of attempted solutions and acquired knowledge. That's why documenting your debugging process is crucial. Maintain a comprehensive log of encountered issues, attempted solutions, and observed outcomes. This log will facilitate picking up where you left off after stepping away from the code and enable seamless sharing of your findings with others if needed. Clean Up Your Code After Debugging After successfully debugging your code, ensure to tidy it up by removing any added Pdb statements or debugging code. This practice not only enhances code readability and comprehension but also prevents the inadvertent inclusion of debugging code in your production codebase. Use Pdb in Conjunction With Other Debugging Tools While Pdb is indeed a powerful tool, it should not be your sole debugging solution. Instead, unlock the full potential by integrating other effective techniques such as log files, unit tests, and code reviews. By combining Pdb with these supplementary tools, you'll gain a comprehensive understanding of your code's inner workings. Conclusion Pdb: The ultimate time-saving and sanity-preserving tool for Python developers. Say goodbye to hours of head-scratching by mastering Pdb's powerful debugging capabilities. 
But remember, use it wisely alongside other tools, document your process, clean up your code, and avoid excessive reliance. Unleash the power of Pdb today and witness the transformative impact it has on your debugging process. Experience unparalleled efficiency and effectiveness as you conquer Python errors effortlessly. With Pdb as your ally, debug like a true professional.
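As a final recap, the error families covered earlier can be reproduced programmatically. Here is a small self-contained sketch (the lambdas are hypothetical examples for illustration, independent of Pdb itself) showing which exception each mistake raises:

```python
def error_name(fn):
    """Run fn and report which exception class it raises, if any."""
    try:
        fn()
        return None
    except Exception as exc:
        return type(exc).__name__

# Name error: using an undefined variable
print(error_name(lambda: undefined_variable))   # NameError

# Type error: adding incompatible types, e.g. "1" + 2
print(error_name(lambda: "1" + 2))              # TypeError

# Index error: the third item of a two-item list
print(error_name(lambda: ["a", "b"][2]))        # IndexError

# Key error: reading a key that was never defined
print(error_name(lambda: {"a": 1}["missing"]))  # KeyError
```

Dropping import pdb; pdb.set_trace() (or breakpoint()) just before any of these calls lets you inspect the offending variables interactively before the exception fires.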
As a software developer with years of experience working primarily with Java, I found myself intrigued when I recently switched to Python for a new project. The transition prompted me to explore the world of asynchronous programming in various languages, including Java, Python, JavaScript, and Golang. This article is a result of my exploration and personal experience with these languages, aiming to provide insight into asynchronous programming techniques and examples. Asynchronous Programming in Java When I first started programming in Java, I quickly became familiar with the concept of threads. Over time, I found that the Executor framework and CompletableFuture class offered more powerful and flexible ways to handle asynchronous operations. For example, I used the Executor framework to build a web scraper that fetched data from multiple websites concurrently. By using a fixed thread pool, I was able to limit the number of simultaneous connections while efficiently managing resources: Java ExecutorService executor = Executors.newFixedThreadPool(10); for (String url : urls) { executor.submit(() -> { // Fetch data from the URL and process it }); } executor.shutdown(); executor.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS); Asynchronous Programming in Python Switching to Python, I was initially challenged by the different approaches to asynchronous programming. However, after learning about the asyncio library and the async/await syntax, I found it to be a powerful and elegant solution. I once implemented a Python-based microservice that needed to make multiple API calls. By leveraging asyncio and async/await, I was able to execute these calls concurrently and significantly reduce the overall response time: Python import aiohttp import asyncio async def fetch(url): async with aiohttp.ClientSession() as session: async with session.get(url) as response: return await response.text() async def main(): urls = [...] 
# List of URLs tasks = [fetch(url) for url in urls] responses = await asyncio.gather(*tasks) asyncio.run(main()) Asynchronous Programming in JavaScript When working with JavaScript, I appreciated its innate support for asynchronous programming. As a result, I have used callbacks, promises, and async/await extensively in various web applications. For example, I once built a Node.js application that required data from multiple RESTful APIs. By using promises and async/await, I was able to simplify the code and handle errors more gracefully: JavaScript const axios = require("axios"); async function fetchData(urls) { const promises = urls.map(url => axios.get(url)); const results = await Promise.all(promises); // Process the results } const urls = [...] // List of URLs fetchData(urls); Asynchronous Programming in Golang During my exploration of Golang, I was fascinated by its native support for concurrency and asynchronous programming, thanks to goroutines and channels. For example, while working on a project that required real-time processing of data from multiple sources, I utilized goroutines and channels to manage resources effectively and synchronize the flow of data: Go package main import ( "fmt" "net/http" "io/ioutil" ) func processSource(url string, ch chan<- string) { resp, err := http.Get(url) if err != nil { ch <- fmt.Sprintf("Error fetching data from %s: %v", url, err) return } defer resp.Body.Close() body, _ := ioutil.ReadAll(resp.Body) _ = body // Process the fetched data ch <- fmt.Sprintf("Processed data from %s", url) } func main() { sources := [...] // List of data sources ch := make(chan string, len(sources)) for _, url := range sources { go processSource(url, ch) } for range sources { fmt.Println(<-ch) } } Conclusion Asynchronous programming is a crucial aspect of modern application development, and having a deep understanding of its implementation across various languages is invaluable. 
My experiences with Java, Python, JavaScript, and Golang have taught me that each language has its unique and powerful features for managing asynchronous tasks. By sharing these experiences and examples, I aim to encourage others to embrace asynchrony in their projects, ultimately leading to more efficient and responsive applications.
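As an appendix to the Python section above: the aiohttp snippet requires real network access, so here is a self-contained sketch of the same asyncio.gather fan-out pattern, substituting asyncio.sleep for the HTTP call (the URLs are hypothetical placeholders):

```python
import asyncio

async def fetch(url):
    # Stand-in for an HTTP request: wait briefly, then return fake data
    await asyncio.sleep(0.01)
    return f"data from {url}"

async def main():
    urls = ["https://example.com/a", "https://example.com/b"]
    # Schedule all fetches concurrently and wait for every result;
    # gather preserves the order of the input coroutines
    tasks = [fetch(url) for url in urls]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)  # ['data from https://example.com/a', 'data from https://example.com/b']
```

Because the sleeps overlap, the total wall time is roughly one sleep interval rather than the sum, which is exactly the benefit the article describes for concurrent API calls.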
Managing concurrent access to shared data can be a challenge, but by using the right locking strategy, you can ensure that your applications run smoothly and avoid conflicts that could lead to data corruption or inconsistent results. In this article, we'll explore how to implement pessimistic and optimistic locking using Kotlin, Ktor, and jOOQ, and provide practical examples to help you understand when to use each approach. Whether you are a beginner or an experienced developer, the idea is to walk away with insights into the principles of concurrency control and how to apply them in practice. Data Model Let's say we have a table called users in our MySQL database with the following schema: SQL CREATE TABLE users ( id INT NOT NULL AUTO_INCREMENT, name VARCHAR(255) NOT NULL, age INT NOT NULL, PRIMARY KEY (id) ); Pessimistic Locking We want to implement pessimistic locking when updating a user's age, which means we want to lock the row for that user when we read it from the database and hold the lock until we finish the update. This ensures that no other transaction can update the same row while we're working on it. First, we need to ask jOOQ to use pessimistic locking when querying the users table. We can do this by setting the forUpdate() flag on the SELECT query: Kotlin val user = dslContext.selectFrom(USERS) .where(USERS.ID.eq(id)) .forUpdate() .fetchOne() This will lock the row for the user with the specified ID when we execute the query. Next, we can update the user's age and commit the transaction: Kotlin dslContext.update(USERS) .set(USERS.AGE, newAge) .where(USERS.ID.eq(id)) .execute() transaction.commit() Note that we need to perform the update within the same transaction that we used to read the user's row and lock it. This ensures that the lock is released when the transaction is committed. You can see how this is done in the next section. 
Ktor Endpoint Finally, here's an example Ktor endpoint that demonstrates how to use this code to update a user's age: Kotlin post("/users/{id}/age") { val id = call.parameters["id"]?.toInt() ?: throw BadRequestException("Invalid ID") val newAge = call.receive<Int>() dslContext.transaction { transaction -> val user = dslContext.selectFrom(USERS) .where(USERS.ID.eq(id)) .forUpdate() .fetchOne() if (user == null) { throw NotFoundException("User not found") } user.age = newAge dslContext.update(USERS) .set(USERS.AGE, newAge) .where(USERS.ID.eq(id)) .execute() transaction.commit() } call.respond(HttpStatusCode.OK) } As you can see, we first read the user's row and lock it using jOOQ's forUpdate() method. Then we check if the user exists, update their age, and commit the transaction. Finally, we respond with an HTTP 200 OK status code to indicate success. Optimistic Version Optimistic locking is a technique where we don't lock the row when we read it, but instead, add a version number to the row and check it when we update it. If the version number has changed since we read the row, it means that someone else has updated it in the meantime, and we need to retry the operation with the updated row. To implement optimistic locking, we need to add a version column to our users table: SQL CREATE TABLE users ( id INT NOT NULL AUTO_INCREMENT, name VARCHAR(255) NOT NULL, age INT NOT NULL, version INT NOT NULL DEFAULT 0, PRIMARY KEY (id) ); We'll use the version column to track the version of each row. Now, let's update our Ktor endpoint to use optimistic locking. 
First, we'll read the user's row and check its version: Kotlin post("/users/{id}/age") { val id = call.parameters["id"]?.toInt() ?: throw BadRequestException("Invalid ID") val newAge = call.receive<Int>() var updated = false while (!updated) { val user = dslContext.selectFrom(USERS) .where(USERS.ID.eq(id)) .fetchOne() if (user == null) { throw NotFoundException("User not found") } val oldVersion = user.version user.age = newAge user.version += 1 val rowsUpdated = dslContext.update(USERS) .set(USERS.AGE, newAge) .set(USERS.VERSION, user.version) .where(USERS.ID.eq(id)) .and(USERS.VERSION.eq(oldVersion)) .execute() if (rowsUpdated == 1) { updated = true } } call.respond(HttpStatusCode.OK) } In this example, we use a while loop to retry the update until we successfully update the row with the correct version number. First, we read the user's row and get its current version number. Then we update the user's age and increment the version number. Finally, we execute the update query and check how many rows were updated. If the update succeeded (i.e., one row was updated), we set updated to true and exit the loop. If the update failed (i.e., no rows were updated because the version number had changed), we repeat the loop and try again. Note that we use the and(USERS.VERSION.eq(oldVersion)) condition in the WHERE clause to ensure that we only update the row if its version number is still the same as the one we read earlier. Trade-Offs Optimistic and pessimistic locking are two essential techniques used in concurrency control to ensure data consistency and correctness in multi-user environments. Pessimistic locking prevents other users from accessing a record while it is being modified, while optimistic locking allows multiple users to access and modify data concurrently. A bank application that handles money transfers between accounts is a good example of a scenario where pessimistic locking is a better choice. 
In this scenario, when a user initiates a transfer, the system should ensure that the funds in the account are available and that no other user is modifying the same account's balance concurrently. In this case, it is critical to prevent any other user from accessing the account while the transaction is in progress. The application can use pessimistic locking to ensure exclusive access to the account during the transfer process, preventing any concurrent updates and ensuring data consistency. An online shopping application that manages product inventory is an example of a scenario where optimistic locking is a better choice. In this scenario, multiple users can access the same product page and make purchases concurrently. When a user adds a product to the cart and proceeds to checkout, the system should ensure that the product's availability is up to date and that no other user has purchased the same product. It is not necessary to lock the product record as the system can handle conflicts during the checkout process. The application can use optimistic locking, allowing concurrent access to the product record and resolving conflicts during the transaction by checking the product's availability and updating the inventory accordingly. Conclusion When designing and implementing database systems, it's important to be aware of the benefits and limitations of both pessimistic and optimistic locking strategies. While pessimistic locking is a reliable way to ensure data consistency, it can lead to decreased performance and scalability. On the other hand, optimistic locking provides better performance and scalability, but it requires careful consideration of concurrency issues and error handling. Ultimately, choosing the right locking strategy depends on the specific use case and trade-offs between data consistency and performance. Awareness of both locking strategies is essential for good decision-making and for building robust and reliable backend systems.
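As a closing illustration, the optimistic version-check-and-retry loop described earlier is not specific to Kotlin or jOOQ. Here is a sketch of the same idea in Python against an in-memory dictionary standing in for the users table (in a real database, the WHERE version = ? clause makes the check-and-update atomic; this single-threaded sketch only mirrors the control flow):

```python
# In-memory stand-in for the users table: id -> {"age": ..., "version": ...}
db = {1: {"age": 30, "version": 0}}

def update_age_optimistic(user_id, new_age):
    """Retry the update until it succeeds against an unchanged version."""
    while True:
        row = db.get(user_id)
        if row is None:
            raise KeyError("User not found")
        old_version = row["version"]
        # Equivalent of:
        #   UPDATE users SET age = ?, version = version + 1
        #   WHERE id = ? AND version = old_version
        current = db[user_id]
        if current["version"] == old_version:
            db[user_id] = {"age": new_age, "version": old_version + 1}
            return db[user_id]["version"]
        # Version changed under us: another writer won; loop and retry

new_version = update_age_optimistic(1, 31)
print(db[1])  # {'age': 31, 'version': 1}
```

Each successful write bumps the version, so a concurrent writer holding the stale version number would see its conditional update match zero rows and retry, just as in the Ktor endpoint above.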
In this series of simulating and troubleshooting performance problems in Scala, let’s discuss how to simulate thread leaks. java.lang.OutOfMemoryError: unable to create new native thread is thrown when more threads are created than the device’s memory capacity can accommodate. When this error is thrown, it will disrupt the application’s availability. Video: To see the visual walk-through of this post, click below: Scala Sample Thread Leak Program Here is a sample Scala program which will generate java.lang.OutOfMemoryError: unable to create new native thread: Scala package com.yc import java.lang.Thread.sleep class ThreadLeakApp { } object ThreadLeakApp { def main(args: Array[String]): Unit = { System.out.println("ThreadApp started") while (true) { new ForeverThread().start() } } class ForeverThread extends Thread { override def run(): Unit = { while (true) { sleep(100) } } } } You can notice that the sample program contains the ThreadLeakApp object, whose main() method creates and starts a new ForeverThread an infinite number of times because of the while (true) loop. The ForeverThread class overrides the run() method, in which the thread is put to continuous sleep, i.e., it repeatedly sleeps for 100 milliseconds again and again. This keeps the ForeverThread alive forever without doing any useful work. A thread dies only when it exits the run() method, and in this sample program the run() method never exits because of the never-ending sleep loop. Since ThreadLeakApp keeps creating ForeverThread instances that never terminate, very soon several thousands of ForeverThreads will be created. This saturates the memory capacity, ultimately resulting in the java.lang.OutOfMemoryError: unable to create new native thread problem. How To Diagnose java.lang.OutOfMemoryError: unable to create new native thread? You can diagnose the OutOfMemoryError: unable to create new native thread problem either through a manual or an automated approach. 
Manual Approach In the manual approach, you need to capture thread dumps as the first step. A thread dump shows all the threads that are in memory and their code execution path. You can capture a thread dump using one of the 8 options mentioned here. But an important criterion is: You need to capture the thread dump right when the problem is happening (which might be tricky to do). Once the thread dump is captured, you need to manually copy it from your production servers to your local machine and analyze it using thread dump analysis tools like fastThread and Samurai. Automated Approach On the other hand, you can also use the yCrash open source script, which captures 360-degree data (GC log, 3 snapshots of thread dump, heap dump, netstat, iostat, vmstat, top, top -H,…) right when the problem surfaces in the application stack and analyzes it instantly to generate a root cause analysis report. We used the automated approach. Below is the root cause analysis report generated by the yCrash tool highlighting the source of the problem. From the report, you can notice that yCrash points out that 2608 threads are in TIMED_WAITING state, and they have the potential to cause the OutOfMemoryError: unable to create new native thread problem. Besides the thread count, the tool also reports the line of code, i.e., com.yc.ThreadLeakApp$ForeverThread.run(ThreadLeakApp.scala:31), in which all the 2608 threads are stuck. Equipped with this information, one can quickly go ahead and fix the java.lang.OutOfMemoryError: unable to create new native thread problem.