Several of us might be familiar with the clear() API in the Java collections framework. In this post, let's discuss the purpose of the clear() API, its performance impact, and what happens inside the JVM when it is invoked.

What Does the clear() API Do?

The clear() API is declared in the java.util.Collection interface and implemented by all the concrete classes that implement it: ArrayList, TreeSet, Stack, and so on. When this method is invoked, it removes all the elements present in the data structure.

How Does ArrayList's clear() Method Work in Java?

In this post, let's focus on ArrayList's implementation of the clear() method; the other data structures' implementations are quite similar. Internally, ArrayList holds an Object array, i.e., Object[], as a member variable. When you add records to the ArrayList, they are added to this Object[]. When you invoke the clear() API on the ArrayList, all the objects (i.e., contents) of this Object[] are removed. Say we created an ArrayList and added 1 million integers (0 to 1,000,000). When the clear() method is invoked on it, all 1 million integers are removed from the underlying Object[]. However, the now-empty Object[] of size 1 million continues to exist, consuming memory unnecessarily.

Creating an ArrayList: Example

It's always easy to learn with an example.
Let's learn the clear() API functionality with this simple example:

```java
01: public class ClearNoDemo {
02:
03:   private static ArrayList<Long> myList = new ArrayList<>();
04:
05:   public static void main(String[] args) throws Exception {
06:
07:     for (int counter = 0; counter < 1_000_000; ++counter) {
08:
09:       myList.add(Long.valueOf(counter));
10:     }
11:
12:     System.out.println("All records added!");
13:
14:     Thread.sleep(10_000); // sleep for 10 seconds
15:   }
16: }
```

Here are the operations we are performing in this ClearNoDemo class:

a. We are creating a myList object whose type is ArrayList in line #3.
b. We are adding 1 million Long wrapper objects (0 to 1,000,000) to this myList in lines #7–#10.
c. In line #14, we are putting the thread to sleep for 10 seconds so that we can capture the heap dump for our discussion.

We ran this program and captured the heap dump using the open-source yCrash script while the program was sleeping in line #14. We captured the heap dump so that we can study how objects are stored in memory. A heap dump is basically a binary file that contains information such as which objects reside in memory, what their sizes are, who references them, and what values they contain. Since a heap dump is a binary file in an unreadable format, we analyzed it using the heap dump analysis tool HeapHero. The report generated by the tool can be found here. Below is the Dominator Tree section from the report, which displays the largest objects in the application:

Fig: ArrayList without invoking clear() API (heap report by HeapHero)

You can see that our myList object is reported as the largest object because we created 1 million Long objects and stored them in it. Notice that the myList object has a child object, elementData, whose type is Object[]. This is the actual Object[] where the 1 million+ records are stored. Also notice that this Object[] occupies 27.5 MB of memory.
This analysis confirms that the objects we are adding are stored in the internal Object[].

List#clear() API Example

Now, we have created a slightly modified version of the above program, where we invoke the clear() API on the ArrayList:

```java
01: public class ClearDemo {
02:
03:   private static ArrayList<Long> myList = new ArrayList<>();
04:
05:   public static void main(String[] args) throws Exception {
06:
07:     for (int counter = 0; counter < 1_000_000; ++counter) {
08:
09:       myList.add(Long.valueOf(counter));
10:     }
11:
12:     long startTime = System.currentTimeMillis();
13:     myList.clear();
14:     System.out.println("Execution Time: " + (System.currentTimeMillis() - startTime));
15:
16:     Thread.sleep(10_000); // sleep for 10 seconds
17:   }
18: }
```

Here are the operations we are performing in this ClearDemo class:

a. We are creating a myList object whose type is ArrayList in line #3.
b. We are adding 1 million Long wrapper objects (0 to 1,000,000) to this myList in lines #7–#10.
c. We are removing the objects from myList in line #13 using the clear() API.
d. In line #16, we are putting the thread to sleep for 10 seconds so that we can capture the heap dump for our discussion.

When you invoke the clear() API, all 1 million Long objects that were stored in the Object[] are removed from memory. However, the Object[] itself continues to remain in memory. To confirm this theory, we ran the above program and captured the heap dump using the open-source yCrash script while the program was sleeping in line #16. We analyzed the heap dump using the heap dump analysis tool HeapHero. The report generated by the tool can be found here. Below is the Dominator Tree section from the report, which displays the largest objects in the application:

Fig: ArrayList after invoking clear() API (heap report by HeapHero)

You can see that our myList object is reported as the largest object, and that it has a child object, elementData, whose type is Object[].
However, this Object[] now has 0 entries (i.e., no elements in it), yet its array size is still 1 million+. Since this empty 1-million-slot array is still present, it occupies 4.64 MB of memory. This analysis confirms that even though the elements are removed by invoking the clear() API, the underlying Object[] of size 1 million+ continues to exist, consuming memory unnecessarily.

Note: Refer to the Memory Impact section below to learn what kind of performance impact your application will experience when invoking the clear() API.

Assigning the List to Null: Example

To make our study more interesting, we created a slightly modified version of the above program where we assign myList a null reference instead of invoking the clear() API to remove the objects from the ArrayList:

```java
01: public class ClearNullDemo {
02:
03:   private static ArrayList<Long> myList = new ArrayList<>();
04:
05:   public static void main(String[] args) throws Exception {
06:
07:     for (int counter = 0; counter < 1_000_000; ++counter) {
08:
09:       myList.add(Long.valueOf(counter));
10:     }
11:
12:     long startTime = System.currentTimeMillis();
13:     myList = null;
14:     System.out.println("Execution Time: " + (System.currentTimeMillis() - startTime));
15:
16:     Thread.sleep(10_000); // sleep for 10 seconds
17:   }
18: }
```

Here are the operations we are performing in this ClearNullDemo class:

a. We are creating a myList object whose type is ArrayList in line #3.
b. We are adding 1 million Long wrapper objects (0 to 1,000,000) to this myList in lines #7–#10.
c. We are assigning null to the list in line #13 instead of using the clear() API.
d. In line #16, we are putting the thread to sleep for 10 seconds so that we can capture the heap dump for our discussion.

When you assign null to myList, the ArrayList and its underlying Object[] become eligible for garbage collection, and they will no longer exist in memory.
To confirm this theory, we ran the above program and captured the heap dump using the open-source yCrash script while the program was sleeping in line #16. We analyzed the heap dump using the heap dump analysis tool HeapHero. The report generated by the tool can be found here. Below is the Dominator Tree section from the report, which displays the largest objects in the application:

You can see that the myList object is no longer present in the list (as it was garbage collected from memory). This is in total contrast to the earlier two example programs.

Memory Impact

Fig: Memory occupied by ArrayList

The above chart shows the memory occupied by the ArrayList:

a. When the ArrayList was populated with 1 million Long records, it occupied 27.5 MB.
b. When the clear() API was invoked, it still occupied 4.64 MB, because the underlying empty Object[] continued to remain in memory.
c. On the other hand, when assigned null, the ArrayList was garbage collected and no longer occupied any memory.

Thus, from the memory perspective, it's a prudent decision to assign the ArrayList to null instead of invoking the clear() API.

Processing Time Impact

```java
01: public void clear() {
02:   modCount++;
03:   final Object[] es = elementData;
04:   for (int to = size, i = size = 0; i < to; i++)
05:     es[i] = null;
06: }
```

Fig: Java source code of ArrayList#clear()

Above is the source code of the clear() method from the JDK. From the source code (i.e., lines #4 and #5), you can see that this method loops through all the elements in the underlying Object[] and assigns each of them null. This is a time-consuming process, especially on a collection that has a lot of elements, like our example with 1 million elements. In such circumstances, assigning the ArrayList variable to null would be more performant.

When To Use the Collection#clear() API?

This raises the question of whether we should ever invoke the clear() API, given its memory and processing impact.
Although I would vote for avoiding it, there are scenarios in which the clear() API has its place:

a. Passing by reference: If you are passing a Collection object as a reference to other parts of the code, then assigning it null can result in the famous NullPointerException. To avoid that exception, you may use the clear() API.
b. Collection size is small: If you are creating only a few collection instances and their sizes are very small (say, only 10–20 elements), then invoking the clear() API versus assigning null won't make much difference.

Conclusion

I hope this post helped you learn about the clear() API and its performance impact in detail.
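To make scenario (a) above concrete, here is a minimal sketch (class and field names are hypothetical) of why a shared reference favors clear() over null assignment:

```java
import java.util.ArrayList;
import java.util.List;

public class SharedListDemo {
    // Hypothetical shared cache; other components hold their own reference to it.
    static List<Long> cache = new ArrayList<>(List.of(1L, 2L, 3L));

    public static void main(String[] args) {
        List<Long> sharedRef = cache; // reference held elsewhere in the code

        cache.clear(); // safe: every reference now sees the same empty list
        System.out.println(sharedRef.isEmpty()); // prints true

        cache = null; // only this reference is gone; sharedRef still pins the
                      // (now empty) list, and any code that reads 'cache'
                      // afterward risks a NullPointerException
        System.out.println(sharedRef == null); // prints false
    }
}
```

The trade-off is exactly the one discussed above: clear() keeps the (oversized) backing array alive but keeps every reference usable, while null assignment frees everything only once no other reference remains.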
This is the third and final post about my OCP-17 preparation. In part one, I explained how playing a human virtual machine and refreshing your mastery of arcane constructs is not pointless, even if the OCP doesn’t — and doesn’t claim to — make you a competent developer. In the second part, I showed how intrinsic motivation keeps itself going without carrots or sticks, provided you can find ways to make your practice fun and memorable. It's time to share some of these examples and tips.

Make It Quality Time

But first, some advice about logistics and time management. As with physical exercise, short and frequent trumps long and sporadic. It’s more effective and more likely to become a habit, like brushing your teeth. Choose the time of day when you are most energetic and productive. The early morning works best for me because I’m a morning person. And there is satisfaction in getting the daily dose crossed off your to-do list, even when it doesn’t feel like a chore. Strike a good balance between reading, practicing, and revising. Once you’ve worked through the entire textbook, you will need to refresh much of the first few chapters. That’s okay. Keep revising them, doing a few questions from each chapter each day. You’ll get there slowly but surely.

Make It Practical and Productive

Practice, in the previous paragraph, means writing novel code aimed at teaching yourself a certain language construct. It’s about what you produce, so copying snippets from the book doesn’t count. If you’ve ever learned a foreign language the old-fashioned way, you will agree that cramming vocabulary and grammar rules does little for your oral skills. Only speaking can make you fluent, preferably among native speakers. It’s like swimming or playing the saxophone: you can’t learn it from a book. Never used the NIO2 API or primitive streams? Never done a comparison or binary search of arrays? Get your feet wet, preferably with autocomplete turned off.
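For instance, a quick self-assigned warm-up covering array comparison, binary search, and primitive streams might look like this (the values are arbitrary):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class WarmUp {
    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7};
        int[] b = {1, 3, 5, 9};

        // Lexicographic comparison: negative, because 7 < 9 at the first mismatch
        System.out.println(Arrays.compare(a, b));

        // Index of the first element where the arrays differ
        System.out.println(Arrays.mismatch(a, b)); // prints 3

        // Binary search requires a sorted array
        System.out.println(Arrays.binarySearch(a, 5)); // prints 2

        // Primitive streams: sum of squares without boxing
        System.out.println(IntStream.of(a).map(x -> x * x).sum()); // prints 84
    }
}
```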
Better yet, scribble in a plaintext editor and paste the result into your IDE when you’re done.

Understand the Why

While Java shows its age, its evolution is managed carefully, so new additions don’t feel as if they were haphazardly tacked on. Decisions take a long time to mature and are made for a reason. When the book doesn’t explain the reasoning behind a certain API peculiarity, try to explain it to yourself instead of parroting a rule. Here’s a case in point from the concurrency API. The submit() method of an executor has two overloaded versions, for a Runnable or a Callable argument. It returns a Future. The void execute() method only takes a Runnable, not a Callable. Why does that make good sense? Well, a Callable yields a value and can throw an Exception. Since execute() acts in a fire-and-forget fashion, the result of a Callable would be inaccessible, so it’s not supported. Conversely, submitting a Runnable with a void result is fine: its Future returns null. The memory athletes from my previous post, who memorized random stacks of cards, have it much tougher than you and I. Learning Java is about memorizing a lot of facts, but they’re not random.

Making a Visual Story

The ancient Greeks taught us how to construct mental memory palaces to store random facts for easy retrieval. Joshua Foer added a moonwalking Albert Einstein to jog his memory. You should make your code samples equally fun and memorable. Here’s how to illustrate the fundamental differences between an ArrayList and a LinkedList. Imagine a movie theater with a fixed number of seats (the ArrayList) and a line of patrons (the LinkedList) at the ticket booth, who receive a numbered ticket. People arrive (offer(..) or add(..)) at the tail of the queue irregularly, while every ten seconds the first person in the queue can enter the theater (poll(), element()) and is shown to their seat (seats.set(number, patron)). Let’s add concurrency to the mix.
Suppose there are two ticket booths, each with its own line, and a central ticket dispenser that increments a number. That’s right: getAndIncrement() in AtomicInteger to the rescue. I’d happily show you the code, but that wouldn’t teach you much. Or take access rights in class hierarchies. Subtypes may not impose stricter access rights or declare broader or new checked exceptions. Let’s put it less academically. Imagine a high-rise with multiple company offices (classes) and several floors (packages). Private access is limited to employees of one company. Package access extends to offices on the same floor. Public access is for everybody: other floors as well as external visitors. The proprietor provides a public bathroom that clearly shows when it’s occupied. You can dress it up with scented towels and music through a subclass, but you must obey this contract:

```java
public void visitRestRoom(Person p) throws OccupiedException { .. }
```

Every outside visitor is welcome to use it. You are not allowed to restrict access to only employees on your floor (package access), much less your own employees (private access). Neither may you bother visitors with a PaymentExpectedException. It violates the contract. Code samples in the exam are meant to confuse you. Your own examples should do the exact opposite. Use real-life examples (a public office restroom, the queue outside a movie theater) and combine them in a way that is easy to visualize and fun to remember.

Mnemonics

Sometimes there’s nothing for it but to commit stuff to memory, like the types you can use as a switch variable (byte, int, char, short, String, enum, var). You can string them together in a mnemonic like this one: In one short intense soundbite, the stringy character enumerated the seven variables for switch. Or how about the methods that operate on the front of a queue (element, push, peek, pop, poll, and remove)?
Elmer pushed to the front of the queue to get a peek at the pop star, but he was pulled out and removed. Yes, it’s far-fetched, silly, and outlandish. That’s what makes mnemonics memorable. To me, at least. Or try your hand at light verse. The educational benefit may not be as strong for you as a reader, but the time I spent crafting the following made sure I won’t quickly confuse Comparable and Comparator again.

The Incomparable Sonnet

You implement Comparable to sort
(in java.lang: no reason to import).
CompareTo runs through items with the aim
to see if they are different or the same.
If it returns a positive, it meant
that this was greater than the argument.
For smaller ones a minus is supplied,
a zero means the same, or “can’t decide”.

Comparator looks similar, but bear
in mind its logic is more self-contained.
It has a single method called compare
where difference of two args is ascertained.
A range of default methods supplement
your lambdas. Chain them to your heart’s content!

Some Closing Thoughts

The aim of your practice is not to pass the exam as quickly as possible (or at all). It’s to become a more competent developer and have fun studying. I mentioned that there is some merit in playing human compiler, but that doesn’t mean I fully agree with the OCP’s line of questioning in its current form and its emphasis on API details. Being able to write code from scratch with only your limited memory to save you is not a must-have skill for a developer in the coming decade. She will need to acquire new skills to counter the relentless progress of AI in the field. If I needed to assess you as a new joiner to our team and you showed me a 90% OCP passing grade, I’d be seriously impressed and a little jealous, but I would still not be convinced that you’re a great developer until I see some of your work. You could still be terrible. And you can be a competent programmer and fail the exam. That’s where the OCP is so different from, say, a driving test.
If you’re a bad driver you should not get a license, no exceptions. And if you fail the test, you’re not a great driver. Full disclosure: it took me four tries. If the original C language was a portable toolbox and Java 1.1 a toolshed, then Java 17 SE is a warehouse with many advanced power tools. The great thing is that you don’t have to wonder what all the buttons do. The instructions are clearly printed on the tools themselves through autocomplete and Javadoc. It makes sense to know what tools the warehouse stocks and when you should use them. But learning the instructions by heart? I can think of a better use of my time, energy, and memory.
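Incidentally, the submit() versus execute() reasoning from earlier is easy to verify for yourself; here's a minimal sketch using only java.util.concurrent:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubmitVsExecuteDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // submit() accepts a Callable and returns a Future carrying its result
        Future<Integer> answer = pool.submit(() -> 6 * 7);
        System.out.println(answer.get()); // prints 42

        // submit() also accepts a Runnable; its Future completes with null
        Future<?> done = pool.submit(() -> System.out.println("side effect"));
        System.out.println(done.get()); // prints null

        // execute() is fire-and-forget: Runnable only, no Future, no result
        pool.execute(() -> System.out.println("fire and forget"));

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```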
In this post, we'll delve into the fascinating world of operator overloading in Java. Although Java doesn't natively support operator overloading, we'll discover how Manifold can extend Java with that functionality. We'll explore its benefits, limitations, and use cases, particularly in scientific and mathematical code. We will also explore three powerful features provided by Manifold that enhance Java's default type safety while enabling impressive programming techniques. We'll discuss unit expressions, type-safe reflection coding, and fixing methods like equals during compilation. Additionally, we'll touch upon a solution that Manifold offers to address some limitations of the var keyword. Let's dive in!

Before we begin, as always, you can find the code examples for this post and other videos in this series on my GitHub page. Be sure to check out the project, give it a star, and follow me on GitHub to stay updated!

Arithmetic Operators

Operator overloading allows us to use familiar mathematical notation in code, making it more expressive and intuitive. While Java doesn't support operator overloading by default, Manifold provides a solution to this limitation. To demonstrate, let's start with a simple Vector class that performs vector arithmetic operations. In standard Java code, we define variables, accept them in the constructor, and implement methods like plus for vector addition. However, this approach can be verbose and less readable.

```java
public class Vec {
    private float x, y, z;

    public Vec(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public Vec plus(Vec other) {
        return new Vec(x + other.x, y + other.y, z + other.z);
    }
}
```

With Manifold, we can simplify the code significantly.
Using Manifold's operator overloading features, we can directly add vectors together using the + operator, as such:

```java
Vec vec1 = new Vec(1, 2, 3);
Vec vec2 = new Vec(1, 1, 1);
Vec vec3 = vec1 + vec2;
```

Manifold seamlessly maps the operator to the appropriate method invocation, making the code cleaner and more concise. This fluid syntax resembles mathematical notation, enhancing code readability. Moreover, Manifold handles reversed notation gracefully. If we reverse the order of the operands, such as a scalar plus a vector, Manifold swaps the order and performs the operation correctly. This flexibility enables us to write code in a more natural and intuitive manner. Let’s say we add this to the Vec class:

```java
public Vec plus(float other) {
    return new Vec(x + other, y + other, z + other);
}
```

This will make all these lines valid:

```java
vec3 += 5.0f;
vec3 = 5.0f + vec3;
vec3 = vec3 + 5.0f;
vec3 += Float.valueOf(5.0f);
```

In this code, we demonstrate that Manifold can swap the order to invoke Vec.plus(float) seamlessly. We also show that support for the += operator comes built in with the plus method. As implied by the previous code, Manifold also supports primitive wrapper objects, specifically in the context of autoboxing. In Java, primitive types have corresponding wrapper objects. Manifold handles the conversion between primitives and their wrapper objects seamlessly, thanks to autoboxing and unboxing. This enables us to work with objects and primitives interchangeably in our code. There are caveats to this, as we will find out.

BigDecimal Support

Manifold goes beyond simple arithmetic and supports more complex scenarios. For example, the manifold-science dependency includes built-in support for BigDecimal arithmetic. BigDecimal is a Java class used for precise calculations involving large numbers or financial computations. By using Manifold, we can perform arithmetic operations with BigDecimal objects using familiar operators, such as +, -, *, and /.
Manifold's integration with BigDecimal simplifies code and ensures accurate calculations. The following code is legal once we add the right set of dependencies, which add method extensions to the BigDecimal class:

```java
var x = new BigDecimal(5L);
var y = new BigDecimal(25L);
var z = x + y;
```

Under the hood, Manifold adds the applicable plus, minus, times, etc. methods to the class. It does so by leveraging class extensions, which I discussed before.

Limits of Boxing

We can also extend existing classes to support operator overloading. Manifold allows us to extend classes and add methods that accept custom types or perform specific operations. For instance, we can extend the Integer class and add a plus method that accepts a BigDecimal as an argument and returns a BigDecimal result. This extension enables us to perform arithmetic operations between different types seamlessly. The goal is to get this code to compile:

```java
var z = 5 + x + y;
```

Unfortunately, this won’t compile with that change. The number five is a primitive, not an Integer, and the only way to get that code to work would be:

```java
var z = Integer.valueOf(5) + x + y;
```

This isn’t what we want. However, there’s a simple solution. We can create an extension to BigDecimal itself and rely on the fact that the order can be swapped seamlessly. This means that this simple extension can support the 5 + x + y expression without a change:

```java
@Extension
public class BigDecimalExt {
    public static BigDecimal plus(@This BigDecimal b, int i) {
        return b.plus(BigDecimal.valueOf(i));
    }
}
```

List of Arithmetic Operators

So far, we have focused on the plus operator, but Manifold supports a wide range of operators. The following table lists the method names and the operators they support:

| Operator | Method     |
|----------|------------|
| +, +=    | plus       |
| -, -=    | minus      |
| *, *=    | times      |
| /, /=    | div        |
| %, %=    | rem        |
| -a       | unaryMinus |
| ++       | inc        |
| --       | dec        |

Notice that the increment and decrement operators don’t have a distinction between the prefix and postfix positioning.
Both a++ and ++a would lead to the inc method.

Index Operator

The support for the index operator took me completely off guard when I looked at it. This is a complete game-changer… The index operator is the square brackets we use to get an array value by index. To give you a sense of what I’m talking about, this is valid code in Manifold:

```java
var list = List.of("A", "B", "C");
var v = list[0];
```

In this case, v will be “A”, and the code is the equivalent of invoking list.get(0). The index operators seamlessly map to get and set methods. We can do assignments as well:

```java
var list = new ArrayList<>(List.of("A", "B", "C"));
var v = list[0];
list[0] = "1";
```

Notice I had to wrap the List in an ArrayList, since List.of() returns an unmodifiable List. But this isn’t the part I’m reeling about. That code is “nice.” This code is absolutely amazing:

```java
var map = new HashMap<>(Map.of("Key", "Value"));
var key = map["Key"];
map["Key"] = "New Value";
```

Yes! You’re reading valid code in Manifold. An index operator is used to look up in a map. Notice that a map has a put() method and not a set method. That’s an annoying inconsistency that Manifold fixed with an extension method. We can then use an object to look up within a map using the operator.

Relational and Equality Operators

We still have a lot to cover… Can we write code like this (referring to the Vec object from before)?

```java
if (vec3 > vec2) {
    // …
}
```

This won’t compile by default. However, if we add the Comparable interface to the Vec class, this will work as expected:

```java
public class Vec implements Comparable<Vec> {
    // …

    public double magnitude() {
        return Math.sqrt(x * x + y * y + z * z);
    }

    @Override
    public int compareTo(Vec o) {
        return Double.compare(magnitude(), o.magnitude());
    }
}
```

The >=, >, <, <= comparison operators will work exactly as expected by invoking the compareTo method. But there’s a big problem: you will notice that the == and != operators are missing from this list.
In Java, we often use these operators to perform pointer comparisons. This makes a lot of sense in terms of performance, and we wouldn’t want to change something so inherent in Java. To avoid that, Manifold doesn’t override these operators by default. However, we can implement the ComparableUsing interface, which is a sub-interface of the Comparable interface. Once we do that, == and != will use the equals method by default. We can override that behavior by overriding the equalityMode() method, which can return one of these values:

- CompareTo — uses the compareTo method for == and !=
- Equals (the default) — uses the equals method
- Identity — uses pointer comparison, as is the norm in Java

That interface also lets us override the compareToUsing(T, Operator) method. This is similar to the compareTo method but lets us create operator-specific behavior, which might be important in some edge cases.

Unit Expressions for Scientific Coding

Notice that unit expressions are experimental in Manifold, but they are one of the most interesting applications of operator overloading in this context. Unit expressions are a new type of operator that significantly simplifies and enhances scientific coding while enforcing strong typing. With unit expressions, we can define notations for mathematical expressions that incorporate unit types. This brings a new level of clarity and type safety to scientific calculations. For example, consider a distance calculation where speed is defined as 100 miles per hour. By multiplying the speed (miles per hour) by the time (hours), we can obtain the distance, as such:

```java
Length distance = 100 mph * 3 hr;

Force force = 5 kg * 9.807 m/s/s;
if (force == 49.035 N) {
    // true
}
```

The unit expressions allow us to express numeric values (or variables) along with their associated units. The compiler checks the compatibility of units, preventing incompatible conversions and ensuring accurate calculations.
This feature streamlines scientific code and enables powerful calculations with ease. Under the hood, a unit expression is just a conversion call. The expression 100 mph is converted to:

```java
VelocityUnit.postfixBind(Integer.valueOf(100))
```

This expression returns a Velocity object. The expression 3 hr is similarly bound to the postfix method and returns a Time object. At this point, the Manifold Velocity class has a times method, which, as you recall, is an operator, and it’s invoked on both results:

```java
public Length times(Time t) {
    return new Length(toBaseNumber() * t.toBaseNumber(), LengthUnit.BASE,
            getDisplayUnit().getLengthUnit());
}
```

Notice that the class has multiple overloaded versions of the times method that accept different object types. A Velocity times a Mass will produce Momentum. A Velocity times a Force results in Power. Many units are supported as part of this package, even at this early experimental stage; check them out here. You might notice a big omission here: currency. I would love to have something like:

```java
var sum = 50 USD + 70 EUR;
```

If you look at that code, the problem should be apparent: we need an exchange rate. This makes no sense without exchange rates and possibly conversion costs. The complexities of financial calculations don’t translate as nicely to the current state of the code. I suspect that this is the reason this feature is still experimental. I’m very curious to see how something like this can be solved elegantly.

Pitfalls of Operator Overloading

While Manifold provides powerful operator overloading capabilities, it's important to be mindful of potential challenges and performance considerations. Manifold's approach can lead to additional method calls and object allocations, which may impact performance, especially in performance-critical environments. It's crucial to consider optimization techniques, such as reducing unnecessary method calls and object allocations, to ensure efficient code execution.
Let’s look at this code:

```java
var n = x + y + z;
```

On the surface, it can seem efficient and short. It physically translates to this code:

```java
var n = x.plus(y).plus(z);
```

This is still hard to spot, but notice that in order to create the result, we invoke two methods and allocate at least two objects. A more efficient approach would be:

```java
var n = x.plus(y, z);
```

This is an optimization we often make for high-performance matrix calculations. You need to be mindful of this and understand what the operator is doing under the hood if performance is important. I don’t want to imply that operators are inherently slower. In fact, they’re as fast as a method invocation, but sometimes the specific method invoked and the number of allocations are unintuitive.

Type Safety Features

The following features aren’t related to operator overloading, but they were part of the second video, so I feel they make sense as part of a wide-sweeping discussion on type safety. One of my favorite things about Manifold is its support for strict typing and compile-time errors. To me, both represent the core spirit of Java.

JailBreak: Type-Safe Reflection

@Jailbreak is a feature that grants access to the private state within a class. While it may sound bad, @Jailbreak offers a better alternative to using traditional reflection to access private variables. By jailbreaking a class, we can access its private state seamlessly, with the compiler still performing type checks. In that sense, it’s the lesser of two evils. If you’re going to do something terrible (accessing private state), then at least have it checked by the compiler. In the following code, the value array is private to String, yet we can manipulate it thanks to the @Jailbreak annotation. This code will print “Ex0osed…”:

```java
@Jailbreak String exposedString = "Exposed...";
exposedString.value[2] = '0';
System.out.println(exposedString);
```

@Jailbreak can be applied to static fields and methods as well.
However, accessing static members requires assigning null to the variable, which may seem counterintuitive. Nonetheless, this feature provides a more controlled and type-safe approach to accessing internal state, minimizing the risks associated with using reflection.

Java
@Jailbreak String str = null;
str.isASCII(new byte[] { 111, (byte)222 });

Finally, all objects in Manifold are injected with a jailbreak() method. This method can be used like this (notice that fastTime is a private field):

Java
Date d = new Date();
long t = d.jailbreak().fastTime;

Self Annotation: Enforcing Method Parameter Type

In Java, certain APIs accept plain objects as parameters, even when a more specific type could be used. This can lead to potential issues and errors at runtime. However, Manifold introduces the @Self annotation, which helps enforce the type of the object passed as a parameter. By annotating a parameter with @Self, we explicitly state that only the specified object type is accepted. This ensures type safety and prevents the accidental use of incompatible types; the compiler catches such errors during development, reducing the likelihood of encountering issues in production. Let’s look at the MySizeClass from my previous posts:

Java
public class MySizeClass {
    int size = 5;

    public int size() {
        return size;
    }

    public void setSize(int size) {
        this.size = size;
    }

    public boolean equals(@Self Object o) {
        return o != null && ((MySizeClass)o).size == size;
    }
}

Notice I added an equals method and annotated the argument with @Self. If I remove the @Self annotation, this code will compile:

Java
var size = new MySizeClass();
size.equals("");
size.equals(new MySizeClass());

With the @Self annotation, the string comparison will fail during compilation.

Auto Keyword: A Stronger Alternative to Var

I’m not a huge fan of the var keyword. I feel it didn’t simplify much, and the price is coding to an implementation instead of to an interface.
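That "implementation instead of interface" cost is easy to see in plain Java; in this sketch (class and variable names are mine, for illustration), var infers the implementation type rather than the interface:

```java
import java.util.ArrayList;
import java.util.List;

public class VarDemo {
    public static void main(String[] args) {
        // With an explicit type, we code to the interface:
        List<String> a = new ArrayList<>();

        // With var, the inferred static type is the implementation class,
        // so switching to another List implementation later can break callers:
        var b = new ArrayList<String>();

        // b's static type is ArrayList<String>, which exposes
        // implementation-specific API such as ensureCapacity();
        // this call would not compile through the List interface:
        b.ensureCapacity(100);

        System.out.println(a.getClass() == b.getClass()); // prints "true"
    }
}
```

Both variables hold the same runtime class, but only the var-declared one leaks ArrayList-specific API into the rest of the method.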
I understand why the devs at Oracle chose this path. Conservative decisions are the main reason I find Java so appealing. Manifold has the benefit of working outside of those constraints, and it offers a more powerful alternative called auto. auto can be used for fields and method return values, making it more flexible than var. It provides a concise and expressive way to define variables without sacrificing type safety. auto is particularly useful when working with tuples, a feature not discussed in this post. It allows for elegant and concise code, enhancing readability and maintainability. You can effectively use auto as a drop-in replacement for var.

Finally

Operator overloading with Manifold brings expressive and intuitive mathematical notation to Java, enhancing code readability and simplicity. While Java doesn't natively support operator overloading, Manifold empowers developers to achieve similar functionality and use familiar operators in their code. By leveraging Manifold, we can write more fluid and expressive code, particularly in scientific, mathematical, and financial applications. The type safety enhancements in Manifold make Java more… well, “Java-like.” They let Java developers build upon the strong foundation of the language and embrace a more expressive, type-safe programming paradigm. Should we add operator overloading to Java itself? I'm not in favor. I love that Java is slow, steady, and conservative. I also love that Manifold is bold and adventurous. That way, I can pick it when I'm doing a project where this approach makes sense (e.g., a startup project) but pick standard conservative Java for an enterprise project.
There's a good reason why Java is one of the most widely used programming languages: it's very powerful and flexible. Because of its adaptability and power, it can be used in a wide variety of applications, from web applications to Android apps. However, it may be difficult for newcomers to know where to begin, since there is so much information out there. But worry not! You won't need to go anywhere else after reading this article. We have compiled a list of the five best Java books for beginners, each of which is simple to read and understand while still doing an excellent job of explaining the fundamentals of the language. These books provide a complete overview of the world of Java programming, covering everything from syntax and programming ideas to more advanced subjects such as data structures and object-oriented programming.

What Is Java?

Java is a widely used object-oriented programming language and flexible software platform that powers billions of devices across the globe, including computers, gaming consoles, medical equipment, and a broad variety of other products. Java provides developers with several advantages, since it is based on the syntax and conventions of C and C++. When it comes to software development, adopting Java has a number of key benefits, one of the most prominent being its remarkable portability: you can write code on a notebook computer and then simply move that code to any device, including mobile devices. Since James Gosling of Sun Microsystems (now owned by Oracle) began work on it in 1991, the language has remained a top option for developers all over the globe. The language was established with the objective of "write once, run anywhere." Java allows developers to concentrate on developing cutting-edge applications without worrying about whether their code will work correctly on other systems.
Although the terms Java and JavaScript may seem interchangeable, there is a significant difference between the two: JavaScript does not need compilation, but Java does, and unlike JavaScript, Java may be executed on almost any platform. New and enhanced software development tools are being released at a dizzying rate, driving fast change in the industry. These technologies pose a threat to tools that were previously considered vital; nonetheless, in the middle of all this upheaval, one language has stayed constant: Java. Even more remarkable is the fact that decades after its creation, Java remains a preferred language for application development. Developers consistently choose it over other popular languages such as Python, Ruby, PHP, Swift, and C++. Therefore, it should come as no surprise that knowledge of Java is valuable for anyone who wants to compete in today's job market. The language has been around for a long time and is very popular, which shows how reliable and useful it is. This makes it a valuable tool for coders and organizations alike.

How To Determine Which Java Book Is Right for You

You could feel overwhelmed if you're just starting out in programming and looking for the perfect Java book, but don't worry! You'll find the ideal resource quickly with the help of our recommendations. First and foremost, evaluate your existing level of expertise. If you're just starting out, it's best to read a book that lays a solid foundation. Give priority to authors who have years of real-world programming expertise and a track record of teaching Java effectively. It is also helpful to read reviews written by other readers before making a purchase decision; readability, structure, and the material's general effectiveness as a Java guide are all worth investigating. Next, take into account your time and budget.
Compare the advantages of a physical book vs. an e-book or online course, and decide whether you want a comprehensive book or a short guide. Last but not least, give some thought to the way you learn best. If you learn best through direct participation, look for a book packed with a wide variety of hands-on activities and projects. If you would rather take a more theoretical approach, choose a book that explores the "why" behind Java's features and the way it operates.

Top 5 Java Books for Beginners

1. Head First Java

Kathy Sierra and Bert Bates' Head First Java is widely regarded as the definitive introduction to the Java programming language. This book is packed with extensive coverage of Java programming fundamentals such as classes, threads, objects, collections, and language features. The content is delivered in a visually attractive way, and the book incorporates puzzles and games to make Java programming easier to comprehend. The book stands out from others on the market because it contains interviews with experienced Java programmers, who generously share their expertise and tips to accelerate the learning process for Java beginners. In the first chapter of Head First Java, the authors take a deep dive into the concepts of inheritance and composition, which provide a terrific opportunity to improve computing practice through problem-solving. In addition, the reader will find helpful tools in the form of vivid charts, memory maps, exercises, and bulleted lists throughout the book to assist in comprehending design patterns. This book, which has a total of 18 chapters and covers topics ranging from basic introductions to distributed computing and deployment, is without a doubt one of the best resources available for beginners who are just starting out in the world of Java programming.
If you can have the greatest, why settle for anything else? Grab Head First Java right now to get started on your path to becoming a Java programming expert, and get ready to open the door to a world of opportunities.

2. Java for Dummies

Java for Dummies, written by Barry A. Burd, is an excellent resource for anyone interested in delving into the realm of Java programming. Using the book's lucid instructions, readers learn to design their own fundamental Java objects and become adept at code reuse. The book provides a wealth of visual aids, including useful photographs and screenshots, to explain how Java code is processed. However, this is not all that Java for Dummies has to offer; it goes above and beyond to provide a high-caliber reading experience. The book comprises nineteen chapters: the first gives readers professional advice on how to make the most of their time with the book, while the last provides a list of the ten best websites for Java programmers. Along the way, readers become familiar with the enhanced features and tools introduced in Java 9, learn approaches and strategies for integrating smaller applications into larger ones, and acquire a thorough understanding of Java objects and code reuse. The book also offers helpful guidance on managing events and exceptions, so readers will be able to comfortably handle common programming difficulties. Overall, Java for Dummies is a book that should be read by everyone who wants to become proficient in Java programming and push their abilities to the next level.

3. Java Programming for Beginners

Java Programming for Beginners, by Mark Lassoff, is a great way to get started in the world of Java programming.
It will walk you through the fundamentals of Java syntax as well as the more complex parts of object-oriented programming. By the end of the book, you will have a thorough grasp of Java SE and be able to create GUI-based Java programs that run on Windows, macOS, and Linux computers. This book is packed with information that is both informative and entertaining, along with challenging exercises and hundreds of code examples that can be executed and used as a learning resource. By reading this book, you will go from knowing the data types in Java through loops and conditionals, and then on to functions, classes, and file handling. The last chapter provides instructions on how to deal with XML and examines the process of developing graphical user interfaces (GUIs). This book provides a practical approach to navigating the Java environment and covers all of the fundamental subjects necessary for a Java programmer.

4. Java: A Beginner's Guide

Herbert Schildt's book Java: A Beginner's Guide is widely regarded as one of the best introductions to the Java programming language. Aspiring programmers should make this gem of a book, which runs to more than 700 pages, their primary reference, since it covers all the fundamentals in an easy-to-read way. The book starts out with the fundamentals of Java syntax, compiling, and application planning, but it moves on to more sophisticated subjects quickly. You'll get right into practical, hands-on lessons that push you to think carefully about the fundamental ideas behind Java programming. In addition, there is a test at the end of each chapter, so you'll have lots of opportunities to put what you've learned into practice and demonstrate that you understand it. However, what sets this book apart from others on the market are the helpful insights and suggestions provided by Java programmers with years of experience.
These professionals share their insights and experiences with you, helping you overcome anything that stands in your way; they cover everything from everyday quirks to major challenges. It's possible that Java: A Beginner's Guide is too complex for some people, but it's ideal for those who are prepared to put in the work and Google questions as they go along. So why wait? With the help of this invaluable book, you can get started on your path to becoming a Java expert right now.

5. Sams Teach Yourself Java

Sams Teach Yourself Java is distinguished not only by its outstanding writing style but also by its ability to enable readers to comprehend the language in less than 24 hours. Okay, maybe 24 hours is a bit of a stretch, but the fact remains that this book is a great way to learn Java quickly. The activities are broken down into manageable chunks, and the explanations are both thorough and easy to follow. This book walks you through the full process of building a program by breaking it down into stages that are simple to comprehend, guiding you step by step. You'll learn how to examine the process and apply key ideas to future tasks, which will help you understand the language better overall. Having a solid understanding of the theory behind Java is one of the most essential components of producing code in the language, and this is where the book really shines, since it makes you think about the whole process before you write a single line of code. If you do so, you will put yourself in a position to tackle even the most difficult programming problems. Sams Teach Yourself Java is a wonderful option for anybody who wants a deeper grasp of the language, whether they are beginners or intermediate coders.

Is It Worth Learning Java in 2023?

Are you considering learning Java in 2023? There is no need to keep investigating, since the answer is an obvious yes.
Java is quickly becoming a crucial programming language for software developers as the world's focus moves more and more toward mobile applications and convenience. It has been the third most popular language among employers for the past two years, and it doesn't look like that is going to change anytime soon. Despite the fact that the pandemic has clearly had an effect on the number of available jobs, the demand for Java developers is still considerable. In fact, there are many compelling reasons to study Java in 2023.

Reasons Why You Should Seriously Consider Learning Java in 2023

Java Is Friendly for Beginners

Java has an open-door policy for beginners. It is a fantastic language that will help you get your feet wet in the realm of coding and navigate the complex landscape of software development. In addition, since Java programmers earn a wage that is on average higher than programmers in many other languages, Java is an excellent choice for new programmers to study as they extend their language skills and advance their careers.

Use of Java Is Not Going Away Anytime Soon

In the last few years, Java has stayed in a pretty stable position, with at least 60,000 jobs always open. Python has made significant progress in recent years, but this has not prevented Java from remaining a dominant programming language. Java has earned its reputation as the "workhorse" language of the programming industry for good reason. Looking to the future, Java will very likely continue to be regarded as a highly effective programming language for many years to come. Because of its reliability and adaptability, it is an excellent investment for any programmer or company that aims to develop systems that will stand the test of time. Therefore, you can relax knowing that Java will not be disappearing anytime soon.
Versatile and Flexible Language

Companies were confronted with a significant obstacle during the pandemic when workers were required to work from home. Because many businesses did not have the appropriate infrastructure and equipment to support remote work, their workers were forced to use their own personal devices, such as laptops, mobile phones, and tablets. However, the trend toward remote work began long before the pandemic and will continue even after it has passed.

Good News for Those Who Code in Java

Java is a very versatile and adaptable programming language that can operate on almost any operating system, including macOS, Windows, and even Android. Java allows businesses to design their own private software with the peace of mind that it will function across all of the devices used by their workers while maintaining high levels of safety, security, and reliability. Java is a strong answer for businesses that want to keep up with the times and give their workers the resources they need to do their jobs from any location at any time.

Strong Support From the Community

Java has been around for decades now and is one of the oldest programming languages still in wide use. Many developers use Java to solve many kinds of problems, so there is a good probability that solutions to most issues are already available, since the path to finding them has been tried and proven before. Additionally, there are a large number of communities and groups on the internet and social media. Other developers and newcomers to the field will find that their peers in the community are eager to lend a helping hand and help solve the problems they are experiencing.
Multiple Open-Source Libraries Are Available for Java

The materials included in open-source libraries may be copied, studied, modified, and shared. There are a number of open-source libraries and tools in the Java ecosystem, including JHipster, Maven, Google Guava, Apache Commons, and others, which may be used to make Java development simpler, more affordable, and more efficient.

Java Has Powerful Development Tools

Java is more than just a programming language; its Integrated Development Environments (IDEs) make it a software development juggernaut. Thanks to industry-leading tools like Eclipse, NetBeans, and IntelliJ IDEA, developers have all the resources they need to produce top-notch apps. These IDEs provide a wide variety of features, ranging from code completion and automatic refactoring to debugging and syntax highlighting. Writing code in Java is not only simpler but also quicker. When it comes to the development of back-end applications, Java is the go-to solution for 90 percent of the Fortune 500. Java serves as the basis for the Android operating system, is essential to cloud platforms like Amazon Web Services and Microsoft Azure, and plays a key role in data processing with Apache Hadoop.

Java Can Run on a Wide Variety of Platforms

Java is a platform-independent programming language: Java source code is compiled to bytecode by the Java compiler, and this bytecode can then be run on any platform with a Java Virtual Machine. Because it can operate on a variety of platforms, Java is sometimes referred to as a WORA language, which stands for "write once, run anywhere." Thanks to this platform independence, many Java programs are developed in a Windows environment even though they are ultimately deployed and run on UNIX platforms.
Conclusion

To summarize, Java is a reliable and extensively used programming language that is important for developing a broad variety of software applications. Having the appropriate resources can make a substantial difference in a beginner's ability to learn the language effectively. This article features an overview of five Java books considered among the best for beginners, which come highly recommended by Java professionals and industry experts. Because they cover fundamental aspects of programming as well as object-oriented programming, data structures, and algorithms, these books are a good place for beginners to begin their studies. By following the directions and examples offered in these books and using them as a guide, beginners can build a strong foundation in Java programming and develop the skills needed to create complex software programs.
Generics in Java

In the Java programming language, generics were introduced in J2SE 5 for dealing with type-safe objects. Generics detect bugs at compile time, which makes code more stable. Before generics, a collection could store any object type; with generics, programmers can restrict a collection to a particular object type.

Advantages of Java Generics

Three main advantages of using generics are given below:

1. Type-Safety

Generics allow a collection to store only a single object type; different object types are not allowed. Without generics, any type of object can be stored:

Java
// declaring a raw list with the name dataList
List dataList = new ArrayList();
// adding an integer into the dataList
dataList.add(10);
// adding string data into the dataList — allowed, since the raw list accepts any Object
dataList.add("10");

With generics, we need to state the object type we want to store:

Java
// declaring a list with the name dataList that only accepts Integer elements
List<Integer> dataList = new ArrayList<Integer>();
// adding an integer into the dataList
dataList.add(10);
// adding string data into the dataList — this statement gives a compile-time error
// dataList.add("10");

2. No Need for Type Casting

With generics, object type casting is not required. Before generics, casting was required:

Java
// declaring a raw list with the name dataList
List dataList = new ArrayList();
// adding an element to the dataList
dataList.add("hello");
// typecasting
String s = (String) dataList.get(0);

After generics, there is no need for object type casting:

Java
// declaring a list with the name dataList
List<String> dataList = new ArrayList<String>();
// adding an element to the dataList
dataList.add("hello");
// typecasting is not required
String s = dataList.get(0);

3. Checking at Compile Time

Issues will not occur at run time, because everything is checked at compile time.
According to good programming practice, handling problems at compile time is far better than handling them at run time.

Java
// declaring a list with the name dataList
List<String> dataList = new ArrayList<String>();
// adding an element into the dataList
dataList.add("hello");
// trying to add an integer to the dataList — this statement will give a compile-time error
// dataList.add(32);

Syntax

A generic collection can be used as:

Java
ClassOrInterface<Type>

Example:

Java
ArrayList<String>

Example Program of Java Generics

The ArrayList class is used in this example, but in place of ArrayList, any class of the collection framework can be used, like HashMap, TreeSet, HashSet, LinkedList, and so on.

Java
// importing packages
import java.util.*;

// creating a class with the name GenericsExample
class GenericsExample {
    // main method
    public static void main(String args[]) {
        // declaring a list with the name dataList to store String elements
        ArrayList<String> dataList = new ArrayList<String>();
        // adding an element into the dataList
        dataList.add("hina");
        // adding an element into the dataList
        dataList.add("rina");
        // if we try to add an integer into the dataList, it will give a compile-time error
        // dataList.add(32); // compile-time error
        // accessing an element from the dataList
        String s = dataList.get(1); // no need for type casting
        // printing an element of the list
        System.out.println("element is: " + s);
        // iterating over the dataList elements
        Iterator<String> itr = dataList.iterator();
        // iterating and printing the elements of the list
        while (itr.hasNext()) {
            System.out.println(itr.next());
        }
    }
}

Output:

element is: rina
hina
rina

Java Generics Example Using Map

In this example, we use a map to demonstrate generics. A map stores data in the form of key-value pairs.
Java
// importing packages
import java.util.*;

// creating a class with the name GenericsExample
class GenericsExample {
    // main method
    public static void main(String args[]) {
        // declaring a map for storing keys of Integer type with String values
        Map<Integer, String> dataMap = new HashMap<Integer, String>();
        // adding some key-value pairs into the dataMap
        dataMap.put(3, "seema");
        dataMap.put(1, "hina");
        dataMap.put(4, "rina");
        // using dataMap.entrySet()
        Set<Map.Entry<Integer, String>> set = dataMap.entrySet();
        // creating an iterator for iterating over the dataMap
        Iterator<Map.Entry<Integer, String>> itr = set.iterator();
        // iterating to print every key-value pair of the map
        while (itr.hasNext()) {
            // type casting is not required
            Map.Entry e = itr.next();
            System.out.println(e.getKey() + " " + e.getValue());
        }
    }
}

Output:

1 hina
3 seema
4 rina

Generic Class

A generic class is a class that can refer to any type. Here, for creating a generic class of a particular type, we use a type parameter T. A generic class declaration looks like a non-generic class declaration, except that the class name is followed by a type parameter section, which may declare one or more type parameters. Because it accepts one or more type parameters, a generic class is also called a parameterized class or a parameterized type. The example below demonstrates the creation of a generic class.

Generic Class Creation

Java
class GenericClassExample<T> {
    T object;

    void addElement(T object) {
        this.object = object;
    }

    T get() {
        return object;
    }
}

Here, the type T represents that it can refer to any type, such as Employee, String, or Integer. The type you specify for the class is used for data storage and retrieval.
Generic Class Implementation

Let us see an example for a better understanding of generic class usage:

Java
// creating a class with the name GenericExample
class GenericExample {
    // main method
    public static void main(String args[]) {
        // using the generic class created in the above example with the Integer type
        GenericClassExample<Integer> m = new GenericClassExample<Integer>();
        // calling addElement on m
        m.addElement(6);
        // if we try to call addElement with a string element, it will give a compile-time error
        // m.addElement("hina"); // compile-time error
        System.out.println(m.get());
    }
}

Output:

6

Generic Method

Similar to the generic class, generic methods can also be created, and a generic method can accept any type of argument. The declaration of a generic method is similar to that of a generic type, but the scope of the type parameter is limited to the method where it is declared. Generic methods are allowed to be both static and non-static. Let us understand generic methods in Java with an example of printing the elements of an array, where E is used to represent the elements.
Java
// creating a class with the name GenericExample
public class GenericExample {
    // creating a generic method for printing the elements of an array
    public static <E> void printElements(E[] elements) {
        // iterating over the elements of the array and printing them
        for (E curElement : elements) {
            System.out.println(curElement);
        }
        System.out.println();
    }

    // main method
    public static void main(String args[]) {
        // declaring an array having Integer-type elements
        Integer[] arrayOfIntegerElements = { 10, 20, 30, 40, 50 };
        // declaring an array having Character-type elements
        Character[] arrayOfCharacterElements = { 'J', 'A', 'V', 'A', 'T', 'P', 'O', 'I', 'N', 'T' };
        System.out.println("Printing the elements of an Integer Array");
        // calling the generic method printElements for the Integer array
        printElements(arrayOfIntegerElements);
        System.out.println("Printing the elements of a Character Array");
        // calling the generic method printElements for the Character array
        printElements(arrayOfCharacterElements);
    }
}

Output:

Printing the elements of an Integer Array
10
20
30
40
50

Printing the elements of a Character Array
J
A
V
A
T
P
O
I
N
T

Wildcard in Java Generics

Wildcard elements in generics are represented by the question mark (?) symbol, and a wildcard can represent any type. For example, writing <? extends Number> means any child class of Number, such as Integer, Double, or Float. The methods of the Number class can then be called on elements of any of those child classes. Wildcards can be used as the type of a local variable, return type, field, or parameter. However, wildcards cannot be used as type arguments for the invocation of a generic method or for the creation of a generic instance.
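The restriction mentioned above can be shown in a short sketch (the class and method names here are mine, for illustration); the commented-out line is the case that does not compile:

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardRestrictions {
    // A wildcard is fine as a parameter type (and as a local variable,
    // field, or return type):
    static int count(List<?> anyList) {
        return anyList.size();
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("a");
        names.add("b");

        // OK: wildcard as the type of a local variable
        List<?> view = names;

        // Not allowed: a wildcard cannot be used to create an instance
        // List<?> bad = new ArrayList<?>();  // compile-time error

        System.out.println(count(view)); // prints "2"
    }
}
```

The wildcard describes an existing list of unknown element type; it cannot name the concrete type needed to instantiate one.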
Let us understand wildcards in Java generics with the help of the example given below:

Java
// importing packages
import java.util.*;

// creating an abstract class with the name Animal
abstract class Animal {
    // creating an abstract method with the name eat
    abstract void eat();
}

// creating a class with the name Cat, which inherits the Animal class
class Cat extends Animal {
    void eat() {
        System.out.println("Cat can eat");
    }
}

// creating a class with the name Dog, which inherits the Animal class
class Dog extends Animal {
    void eat() {
        System.out.println("Dog can eat");
    }
}

// creating a class for testing the wildcards of Java generics
class GenericsExample {
    // creating a method that accepts only child classes of Animal
    public static void animalEat(List<? extends Animal> lists) {
        for (Animal a : lists) {
            // calling the Animal class method on an instance of the child class
            a.eat();
        }
    }

    // main method
    public static void main(String args[]) {
        // creating a list of type Cat
        List<Cat> list = new ArrayList<Cat>();
        list.add(new Cat());
        list.add(new Cat());
        list.add(new Cat());
        // creating a list of type Dog
        List<Dog> list1 = new ArrayList<Dog>();
        list1.add(new Dog());
        list1.add(new Dog());
        // calling animalEat for list
        animalEat(list);
        // calling animalEat for list1
        animalEat(list1);
    }
}

Output:

Cat can eat
Cat can eat
Cat can eat
Dog can eat
Dog can eat

Upper Bounded Wildcards

The main objective of using upper-bounded wildcards is to relax the restrictions on a variable: an unknown type is restricted to be a particular type or a subtype of that type. An upper-bounded wildcard is written as a question mark symbol, followed by the extends keyword (which is used here for both classes and interfaces), followed by the upper bound.

Syntax of Upper Bound Wildcard

? extends Type

Example of Upper Bound Wildcard

Let us understand the upper-bounded wildcard with an example.
Here we use an upper-bounded wildcard to write a sum method that works for both List<Double> and List<Integer>.

Java

// importing packages
import java.util.ArrayList;

// creating a class with the name UpperBoundWildcardExample
public class UpperBoundWildcardExample {
    // creating a method using an upper-bounded wildcard
    private static Double sum(ArrayList<? extends Number> list) {
        double add = 0.0;
        for (Number n : list) {
            add = add + n.doubleValue();
        }
        return add;
    }

    // main method
    public static void main(String[] args) {
        // creating a list of Integer type
        ArrayList<Integer> list1 = new ArrayList<>();
        // adding elements to list1
        list1.add(30);
        list1.add(40);
        // calling the sum method and printing the sum
        System.out.println("Sum is= " + sum(list1));
        // creating a list of Double type
        ArrayList<Double> list2 = new ArrayList<>();
        list2.add(10.0);
        list2.add(20.0);
        // calling the sum method and printing the sum
        System.out.println("Sum is= " + sum(list2));
    }
}

Output:

Sum is= 70.0
Sum is= 30.0

Unbounded Wildcards

An unbounded wildcard, such as List<?>, specifies a list of an unknown type.

Example of Unbounded Wildcards

Java

// importing packages
import java.util.Arrays;
import java.util.List;

// creating a class with the name UnboundedWildcardExample
public class UnboundedWildcardExample {
    // creating a method displayElements using an unbounded wildcard
    public static void displayElements(List<?
> list) {
        for (Object n : list) {
            System.out.println(n);
        }
    }

    // main method
    public static void main(String[] args) {
        // creating a list of type Integer
        List<Integer> list1 = Arrays.asList(6, 7, 8);
        System.out.println("printing the values of the integer list");
        // calling displayElements for list1
        displayElements(list1);
        // creating a list of type String
        List<String> list2 = Arrays.asList("six", "seven", "eight");
        System.out.println("printing the values of the string list");
        // calling displayElements for list2
        displayElements(list2);
    }
}

Output:

printing the values of the integer list
6
7
8
printing the values of the string list
six
seven
eight

Lower Bounded Wildcards

A lower-bounded wildcard restricts an unknown type to be a particular type or a supertype of that type. It is written as a question mark symbol, followed by the super keyword and then the lower bound.

Syntax of Lower Bounded Wildcard: ? super Type

Example of Lower Bounded Wildcard

Java

// importing packages
import java.util.*;

// creating a class with the name LowerBoundWildcardExample
public class LowerBoundWildcardExample {
    // creating a method using a lower-bounded wildcard
    private static void displayElements(List<?
super Integer> list) {
        for (Object n : list) {
            System.out.println(n);
        }
    }

    // main method
    public static void main(String[] args) {
        // creating a list of type Integer
        List<Integer> list1 = Arrays.asList(6, 7, 8);
        System.out.println("printing the values of the integer list");
        // calling displayElements for list1
        displayElements(list1);
        // creating a list of type Number
        List<Number> list2 = Arrays.asList(8.0, 9.8, 7.6);
        System.out.println("printing the values of the number list");
        // calling displayElements for list2
        displayElements(list2);
    }
}

Output:

printing the values of the integer list
6
7
8
printing the values of the number list
8.0
9.8
7.6

Conclusion

With the introduction of generics, the Java programming language lets programmers restrict a collection to a particular object type. Type safety, no need for type casting, and checking at compile time are the three main advantages of using generics. A generic class is a class that can refer to any type. Similar to generic classes, generic methods can also be created, and a generic method can accept any type of argument. Wildcards in generics are represented by the question mark (?) symbol. Upper-bounded, lower-bounded, and unbounded are the three types of wildcards in Java generics.
Much has been written about the impact of AI tooling on software development, or indeed on any creative endeavor. Some of those blogs may already be written by AI, who knows? If the benefits for mundane coding tasks today are any foretaste of what lies ahead, I dare not contemplate what the next year will bring, let alone the next decade. I’m not overly worried. The price of job security was always continuous upgrading of your skillset - which is why I’m studying for the Oracle Certified Professional Java SE 17 developer exam. The OCP is reassuringly and infuriatingly old-school. It grills you on arrays, shorts, ObjectOutputStream, the bitwise complement operator ~, and much else you’re probably never going to write or encounter. What is the point? I’ll tell you. On the one hand, the programming profession has changed beyond recognition from when I started in 1999, and long before that. I look forward to veteran Jim Highsmith’s upcoming book Wild West to Agile. It’s supposed to be liberally sprinkled with personal anecdotes from the era of punch cards and overnight compiles. The teasers remind me of the classic Four Yorkshiremen sketch by Monty Python, boasting about how tough they had it. “We lived eighteen to a room! – Luxury! We lived in a septic tank.” On the other hand, much less has changed at the level of methods and classes or loops and logic, despite the mushrooming complexity and range of APIs and tooling. Real language innovations are rare, and the challenges for learners remain the same. Autocomplete doesn’t make understanding a tail-recursive function any easier, but before Stack Overflow, it made sense to memorize such common patterns, because it was too much bother to look them up. It still makes sense.

Professor Dijkstra Would Not Approve of Copilot

You can safely assume that a famous Turing Award winner like Edsger Dijkstra (1930-2002) would have been horrified by GitHub Copilot.
He preferred doing his mathematical proofs on a blackboard and believed that software engineering had little to do with academic computer science, and that the latter was not even about computers. After all, we don’t call a surgeon’s work "knife science."

Studying for the OCP means honing your Spartan mindset. Taking it unprepared, even as an experienced developer, is a waste of money: you will fail. It’s not a test of your design intuitions or clean coding hygiene. It calls on your knowledge of arcane details and APIs, but even more on your short-term memory skills to grasp some quite insane code. Boy, have these skills gone rusty! I wrote earlier that IntelliJ has made us all a bit stupid, and it’s true. I’m still making plenty of mistakes. Factual knowledge gaps don’t trip me up anymore, but the time constraint does. The two-minute average you can spend on each question is tight. Yes, there are short questions requiring a single answer, but they don’t offset the ones with convoluted sample code, where you rack your brains for five minutes over the effect of changing one statement and fail to spot the missing semicolon after a switch expression, which means the whole mess would never compile.

Three Reasons Not To Bother

There are reasons not to bother with this self-torture, but there’s a flavor of cognitive dissonance to them. They’re attractive excuses to let yourself off the hook. First, what’s the point of playing human compiler and virtual machine over code samples that solve no real-world task and are only designed to confuse? The point is to train your mind muscles, to sharpen the saw. Nobody disputes that IDEs and their real-time compilation warnings are a great productivity boost. Nobody edits a big Java project in a plain-text editor; that would be inefficient and error-prone. But you want to know what you’re doing and at least understand the warnings. I don’t do dumbbell exercises for the sake of dumbbell exercises.
I do them so I can still lift my own groceries when I retire. Secondly, neither this OCP nor any of its predecessors teach you how to write clean code, much less design a complex product. It has nothing to say about testing. It’s a thorough foundation of the language toolkit, but no more. Calling the exam inadequate for that reason is a strawman argument. You pooh-pooh it for not teaching you something it never claimed to teach you in the first place. If you take an advanced French grammar course, it won’t teach you how to write a novel either. A third bone of contention is the OCP’s focus on little-used and legacy features. Who uses serializable object streams when the whole world has been using JSON for years? Well, there’s an awful lot of twenty-year-old, pre-version 5 legacy around, and you shouldn’t be taken aback by it. Also, in the makers’ defense: deprecated features or ways of working do eventually make their way to the exit. The SCJP 6 I took in 2010 had some tough questions on low-level thread handling, all of which are now abstracted behind the concurrency API. We can expect arrays to go the same way, but no time soon. To Be Continued I have much more to say on each of the topics I raised, so I want to make this a series of blogs. I want to explore and explain my personal motivations throughout the process and hope to share useful advice on how to make the journey a success. The aim is not cramming to get a certificate and promptly forget what you learned. You’re not in high school. The aim is to respect the importance of certain mental skills we shouldn’t allow to get rusty. This will make you a better and happier software developer. I have the following topics in mind for the next months. Motivation and the "okay point." Do you enjoy learning for learning’s sake, or is it a means to an end? If so, you will master the bare minimum you need to get the job done and give up once you reach the okay point. 
This happens to seniors, especially when they are burdened with many non-coding tasks. The only effective way to learn is to make it fun, compelling, and practical. Always learn with the IDE at your side and disable all the clever assistants. I’m compiling a collection of mnemonics and rhymes, which I hope to expand. When it comes to remembering, the sillier and crazier the better – actually, the lewder the better, but I’ll leave that to your own imagination.
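As a closing taste of the arcana the exam probes, here is a snippet of my own (not an actual exam question) covering two details mentioned above, the complement operator and shorts:

```java
public class ExamArcana {
    public static void main(String[] args) {
        // The bitwise complement flips every bit: ~x equals -x - 1
        System.out.println(~5);   // prints -6
        System.out.println(~0);   // prints -1

        // Compound assignment hides a narrowing cast: s += 1 compiles
        // because it means s = (short) (s + 1), whereas s = s + 1 would
        // not compile, since s + 1 is promoted to int
        short s = 10;
        s += 1;
        System.out.println(s);    // prints 11
    }
}
```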
This is the second article in a series that teaches developers how to build serverless Java functions for dynamic data processing with a NoSQL database. In the previous article, you learned how to design an entity class and implement abstract services to bind the DynamoDB client for the REST APIs locally. In case you haven’t read it already, find the first tutorial here. You can also find the piggybank project in the GitHub repository. Let’s go into the outer-loop practices in production using AWS Lambda and DynamoDB.

Creating a Serverless Database Using Amazon DynamoDB

Note that you’ll create multiple AWS services as you go through the following tutorial. If you don’t already have an AWS account, proceed with this documentation and configure an AWS credential in your local environment. Use the AWS DynamoDB API command to create a new entry table in AWS DynamoDB as your production database. Find more information about Setting Up DynamoDB (Web Service). Run the following AWS command in the AWS CloudShell or your local terminal.

Shell

aws dynamodb create-table \
    --table-name entry \
    --attribute-definitions \
        AttributeName=accountID,AttributeType=S \
        AttributeName=timestamp,AttributeType=N \
    --key-schema \
        AttributeName=accountID,KeyType=HASH \
        AttributeName=timestamp,KeyType=RANGE \
    --provisioned-throughput \
        ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --table-class STANDARD

The output should look like this.
JSON

{
    "TableDescription": {
        "AttributeDefinitions": [
            {
                "AttributeName": "accountID",
                "AttributeType": "S"
            },
            {
                "AttributeName": "timestamp",
                "AttributeType": "N"
            }
        ],
        "TableName": "entry",
        "KeySchema": [
            {
                "AttributeName": "accountID",
                "KeyType": "HASH"
            },
            {
                "AttributeName": "timestamp",
                "KeyType": "RANGE"
            }
        ],
        "TableStatus": "CREATING",
        "CreationDateTime": "2023-04-28T11:51:51.656000-07:00",
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5
        },
        "TableSizeBytes": 0,
        "ItemCount": 0,
        "TableArn": "arn:aws:dynamodb:us-east-1:649770145326:table/entry",
        "TableId": "32be22b2-33d4-4132-81f4-dfc18a402847",
        "TableClassSummary": {
            "TableClass": "STANDARD"
        }
    }
}

Go to DynamoDB > Tables in the AWS web console. Then, verify that the new entry table was created properly, as shown below in Figure 1.

Figure 1: A table in DynamoDB

Build Your Data Processing Application as a Serverless Function

If you have already deployed applications to AWS Lambda, you will have learned how to build and deploy them using the AWS Serverless Application Model (SAM). A big challenge for you, the developer, is to learn and memorize a variety of AWS commands for those tasks. Don’t worry about them anymore: Quarkus enables you to build, package, and deploy your Java applications to AWS Lambda without the steep learning curve. Add the Quarkus AWS Lambda extension using the Quarkus command.

Shell

quarkus ext add amazon-lambda-http

The output should look like this.
Shell

[SUCCESS] ✅ Extension io.quarkus:quarkus-amazon-lambda-http has been installed

Build the application using the following Quarkus command.

Shell

quarkus build --no-tests

The output should end with BUILD SUCCESS. Inspect the generated files in the target directory:

function.zip - Lambda deployment file
bootstrap-example.sh - Example bootstrap script for native deployments
sam.jvm.yaml - (Optional) For use with the SAM command and local testing only
sam.native.yaml - (Optional) For use with the SAM command and native local testing only

Creating a Deployment Template

To access Amazon DynamoDB with an advanced security configuration, you need to create your own AWS SAM template before you deploy a new AWS Lambda function. Create a new template.yml file in the root directory of the piggybank project. Add the following code to specify an AWS Lambda function.

YAML

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  PiggyBank AWS SAM application
Resources:
  Piggybank:
    Type: AWS::Serverless::Function
    Properties:
      Handler: io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest
      Runtime: java17
      CodeUri: target/function.zip
      MemorySize: 1024
      SnapStart:
        ApplyOn: PublishedVersions
      AutoPublishAlias: snap
      Policies:
        - DynamoDBCrudPolicy:
            TableName: entry
      Timeout: 15
      Environment:
        Variables:
          JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
      Events:
        HttpApiEvent:
          Type: HttpApi
Outputs:
  PiggybankApi:
    Description: URL for application
    Value: !Sub 'https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com/'
    Export:
      Name: PiggybankApi

Enabling SnapStart Optimizations

Adopting serverless Java brings various benefits:

Improve scalability, performance, and security
Reduce the waste of allocating resources on demand
Reduce cost, even scaling down to zero

However, it also presents a big challenge: slow startup time caused by cold starts, which usually take a few seconds.
GraalVM Native Image integration enables Java developers to overcome this challenge because the executable image contains the application code, required libraries, Java APIs, and a reduced VM. The smaller VM base improves the startup time of the application and produces a minimal disk footprint. However, there are tradeoffs to using native executables, such as limited debugging and monitoring, lower peak throughput, and a different developer experience. What if you could still have a startup time as fast as a native image, but keep using the Java virtual machine (JVM) to run serverless functions? AWS Lambda SnapStart is a snapshot-and-restore mechanism that drastically reduces the cold startup time of Java functions on AWS. You'll use SnapStart to optimize our Java serverless function on AWS Lambda. Find more information on how to improve startup performance with Lambda SnapStart.

The Quarkus Amazon Lambda extension enables the SnapStart feature automatically when you deploy applications to AWS Lambda. You can easily turn it off in application.properties.

Properties files

quarkus.snapstart.enabled=true|false

Deploying the Function to AWS Lambda

Let’s deploy your function application to AWS Lambda using the SAM command:

Shell

sam deploy -g

The output should look like this. Make sure to enter “y” at the prompt “Piggybank may not have authorization defined, Is this okay?”.
Shell

Configuring SAM deploy
======================

Looking for config file [samconfig.toml] :  Not found

Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]:
AWS Region [YOUR-REGION]:
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]:
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]:
#Preserves the state of previously provisioned resources when an operation fails
Disable rollback [y/N]:
Piggybank may not have authorization defined, Is this okay? [y/N]: y
Save arguments to configuration file [Y/n]:
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:

Looking for resources needed for deployment:
Creating the required resources...
...

Once deployed, go to the AWS Lambda page in the AWS web console. Then, you will see the newly deployed serverless function on AWS Lambda.
Figure 2: Quarkus function on AWS Lambda

You can retrieve the HTTP API endpoint that was generated automatically using the following AWS command:

Shell

export API_URL=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`PiggybankApi`].OutputValue' --output text)
echo $API_URL

Add a few account entries to AWS DynamoDB (web service) using the curl command:

Shell

curl -X POST ${API_URL}/entryResource -H 'Content-Type: application/json' -d '{"accountID" : "bofa","category": "Food", "description": "Shrimp", "amount": "-20", "balance": "0", "date": "2023-02-01+11:12"}'
curl -X POST ${API_URL}/entryResource -H 'Content-Type: application/json' -d '{"accountID" : "bofa","category": "Car", "description": "Flat tires", "amount": "-200", "balance": "0", "date": "2023-03-01+09:30"}'
curl -X POST ${API_URL}/entryResource -H 'Content-Type: application/json' -d '{"accountID" : "bofa","category": "Payslip", "description": "Income", "amount": "2000", "balance": "0", "date": "2023-04-01+23:00"}'
curl -X POST ${API_URL}/entryResource -H 'Content-Type: application/json' -d '{"accountID" : "bofa","category": "Utilities", "description": "Gas", "amount": "-400", "balance": "0", "date": "2023-05-01+01:01"}'

Go back to the AWS web console. Then, navigate to Amazon DynamoDB > Tables, select the entry table, and explore the table items. It should look like this:

Figure 3: A table in AWS DynamoDB

Great job! You just deployed new serverless Java functions on AWS Lambda!

Conclusion

You learned how Quarkus enables developers to deploy serverless functions on AWS Lambda that connect to AWS DynamoDB to process dynamic data. Quarkus also enables AWS Lambda SnapStart automatically, for a startup time almost as fast as that of native executables built with GraalVM. You might wonder when to use the JVM with SnapStart enabled versus a native binary with GraalVM integration. It depends on the goals you want to achieve with your Java applications, serverless or not.
Take a look at Figure 4!

Figure 4: Comparison of JVM and GraalVM

For example, if you care more about a low memory footprint and small package size, a native binary on GraalVM is better for you. On the other hand, the JVM should be better for peak throughput, monitoring, and debugging.
Previously we checked on ReentrantLock and its fairness. One of the things we can stumble upon is the creation of a Condition. By using a Condition, we can create mechanisms that allow threads to wait for specific conditions to be met before proceeding with their execution.

Java

public interface Condition {
    void await() throws InterruptedException;
    void awaitUninterruptibly();
    long awaitNanos(long nanosTimeout) throws InterruptedException;
    boolean await(long time, TimeUnit unit) throws InterruptedException;
    boolean awaitUntil(Date deadline) throws InterruptedException;
    void signal();
    void signalAll();
}

The closest we have come to this so far is the Object monitor's wait method. A Condition is bound to a Lock, and a thread cannot interact with a Condition and its methods if it does not hold that Lock. Also, a Condition uses the underlying Lock mechanisms: for example, signal and signalAll will use the underlying queue of threads maintained by the Lock and will notify them to wake up.

One of the obvious things to implement using Conditions is a BlockingQueue. Worker threads process data and publisher threads dispatch data. Data is published on a queue, worker threads process data from the queue, and they should wait if there is no data in the queue.

For a worker thread, if the condition is met, the flow is the following:

Acquire the lock
Check the condition
Process data
Release the lock

If the condition is not met, the flow would change slightly to this:

Acquire the lock
Check the condition
Wait until the condition is met
Re-acquire the lock
Process data
Release the lock

The publisher thread, whenever it adds a message, should notify the threads waiting on the condition. The workflow would be like this:

Acquire the lock
Publish data
Notify the workers
Release the lock

Obviously, this functionality already exists through the BlockingQueue interface and the LinkedBlockingDeque and ArrayBlockingQueue implementations.
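Before writing our own, it is worth seeing the pattern with the JDK's built-in ArrayBlockingQueue, whose take() blocks internally on exactly this kind of Lock/Condition machinery (a minimal sketch):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BuiltInQueueExample {
    public static void main(String[] args) throws InterruptedException {
        // A bounded queue: put() blocks when full, take() blocks when empty
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread worker = new Thread(() -> {
            try {
                // take() waits until the publisher has put an element
                String message = queue.take();
                System.out.println("Received: " + message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        queue.put("hello"); // wakes the blocked worker
        worker.join();
    }
}
```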
We will proceed with an implementation for the sake of the example. Let’s see the message queue:

Java

package com.gkatzioura.concurrency.lock.condition;

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MessageQueue<T> {

    private final Queue<T> queue = new LinkedList<>();

    private final Lock lock = new ReentrantLock();
    private final Condition hasMessages = lock.newCondition();

    public void publish(T message) {
        lock.lock();
        try {
            queue.offer(message);
            hasMessages.signal();
        } finally {
            lock.unlock();
        }
    }

    public T receive() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                hasMessages.await();
            }
            return queue.poll();
        } finally {
            lock.unlock();
        }
    }
}

Now let’s put it into action:

Java

MessageQueue<String> messageQueue = new MessageQueue<>();

@Test
void testPublish() throws InterruptedException {
    Thread publisher = new Thread(() -> {
        for (int i = 0; i < 10; i++) {
            String message = "Sending message num: " + i;
            log.info("Sending [{}]", message);
            messageQueue.publish(message);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    });

    Thread worker1 = new Thread(() -> {
        for (int i = 0; i < 5; i++) {
            try {
                String message = messageQueue.receive();
                log.info("Received: [{}]", message);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    });

    Thread worker2 = new Thread(() -> {
        for (int i = 0; i < 5; i++) {
            try {
                String message = messageQueue.receive();
                log.info("Received: [{}]", message);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
    });

    publisher.start();
    worker1.start();
    worker2.start();

    publisher.join();
    worker1.join();
    worker2.join();
}

That’s it! Our workers processed the expected messages and waited when the queue was empty.
Integrating MicroStream and PostgreSQL, leveraging the new Jakarta EE specifications known as Jakarta Data, presents a powerful solution for developing ultrafast applications with SQL databases. MicroStream is a high-performance, in-memory object graph persistence library that enables efficient data storage and retrieval. At the same time, PostgreSQL is a widely used, robust SQL database system known for its reliability and scalability. Developers can achieve remarkable application performance and efficiency by combining these technologies' strengths and harnessing Jakarta Data's capabilities. This article will explore the integration between MicroStream and PostgreSQL, focusing on leveraging Jakarta Data to enhance the development process and create ultrafast applications. We will delve into the key features and benefits of MicroStream and PostgreSQL, highlighting their respective strengths and use cases. Furthermore, we will dive into the Jakarta Data specifications, which provide a standardized approach to working with data in Jakarta EE applications, and how they enable seamless integration between MicroStream and PostgreSQL. By the end of this article, you will have a comprehensive understanding of how to leverage MicroStream, PostgreSQL, and Jakarta Data to build high-performance applications that combine the benefits of in-memory storage and SQL databases. Facing the Java and SQL Integration Challenge The biggest challenge in integrating SQL databases with Java applications is the impedance mismatch between the object-oriented programming (OOP) paradigm used in Java and the relational database model used by SQL. This impedance mismatch refers to the fundamental differences in how data is structured and manipulated in these two paradigms, leading to the need for conversion and mapping between the object-oriented world and the relational database world. 
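To make the mismatch concrete, here is a hedged sketch of the hand-written mapping a plain JDBC application performs for a single query; the table, column, and class names are illustrative, not taken from the project shown later:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PlaneJdbcMapper {

    // The object-oriented side: a plain Java value type
    public record Plane(String id, String model, int year) {}

    // The relational side: every read requires row-by-row, column-by-column
    // conversion, and every schema change forces this code to change too
    public static List<Plane> findByYear(Connection connection, int year) throws SQLException {
        List<Plane> planes = new ArrayList<>();
        String sql = "SELECT id, model, year FROM plane WHERE year = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setInt(1, year);
            try (ResultSet rows = statement.executeQuery()) {
                while (rows.next()) {
                    planes.add(new Plane(
                            rows.getString("id"),
                            rows.getString("model"),
                            rows.getInt("year")));
                }
            }
        }
        return planes;
    }
}
```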
Java is known for its powerful OOP features, such as encapsulation, polymorphism, and inheritance, which enable developers to create modular, maintainable, and readable code. However, these concepts do not directly translate to the relational database model, where data is stored in tables with rows and columns. As a result, when working with SQL databases, developers often have to perform tedious and error-prone tasks of mapping Java objects to database tables and converting between their respective representations. This impedance mismatch not only hinders productivity but also consumes significant computational power. According to some estimates, up to 90% of computing power can be consumed by the conversion and mapping processes between Java objects and SQL databases. It impacts performance and increases the cost of cloud resources, making it a concern for organizations following FinOps practices. MicroStream addresses this challenge with its in-memory object graph persistence approach by eliminating the need for a separate SQL database and the associated mapping process. With MicroStream, Java objects can be stored directly in memory without the overhead of conversions to and from a relational database. It results in significant performance improvements and reduces the power consumption required for data mapping. By using MicroStream, developers can leverage the natural OOP capabilities of Java, such as encapsulation and polymorphism, without the need for extensive mapping and conversion. It leads to cleaner and more maintainable code and reduces the complexity and cost of managing a separate database system. In the context of a cloud environment, the reduction in power consumption provided by MicroStream translates to cost savings, aligning with the principles of the FinOps culture. Organizations can optimize their cloud infrastructure usage and reduce operational expenses by minimizing the resources needed for data mapping and conversion. 
Overall, MicroStream helps alleviate the impedance mismatch between SQL databases and Java, enabling developers to build high-performance applications that take advantage of OOP's natural design and readability while reducing the power consumption and costs associated with data mapping.

While addressing the impedance mismatch between SQL databases and Java applications can bring several advantages, it is vital to consider the trade-offs involved. Here are some trade-offs associated with the impedance mismatch:

Increased complexity: Working with an impedance mismatch adds complexity to the development process. Developers need to manage and maintain the mapping between the object-oriented model and the relational database model, which can introduce additional layers of code and increase the overall complexity of the application.

Performance overhead: The conversion and mapping process between Java objects and SQL databases can introduce performance overhead. The need to transform data structures and execute database queries can impact overall application performance, especially when dealing with large datasets or complex data models.

Development time and effort: Addressing the impedance mismatch often requires writing additional code for mapping and conversion, which adds to the development time and effort. Developers need to implement and maintain the logic necessary to synchronize data between the object-oriented model and the relational database, which can increase the development effort and introduce potential sources of errors.

Maintenance challenges: When an impedance mismatch exists, any changes to the object-oriented model or the database schema may require updates to the mapping and conversion logic. This can create maintenance challenges, as modifications to one side of the system may necessitate adjustments on the other side to ensure consistency and proper data handling.
Learning curve: Dealing with the impedance mismatch typically requires understanding the intricacies of both the object-oriented paradigm and the relational database model. Developers must have a good grasp of SQL, database design, and mapping techniques, which may introduce a learning curve for those more accustomed to working solely in the object-oriented domain.

It is essential to weigh these trade-offs against the benefits and specific requirements of the application. Different scenarios may prioritize various aspects, such as performance, development speed, or long-term maintenance. Alternative solutions like MicroStream can help mitigate these trade-offs by providing a direct object storage approach and reducing the complexity and performance overhead associated with the impedance mismatch.

Enough theory for today; let’s see this integration in practice. It will be a simple application using Java, Maven, and Java SE. The first step is to have PostgreSQL installed. To make it easier, let’s use Docker and run the following command:

Shell

docker run --rm=true --name postgres-instance -e POSTGRES_USER=micronaut \
  -e POSTGRES_PASSWORD=micronaut -e POSTGRES_DB=airplane \
  -p 5432:5432 postgres:14.1

Ultrafast With PostgreSQL and MicroStream

In this example, let’s use an airplane sample where we’ll have several planes and models to filter. The first step of our project concerns the Maven dependencies. Besides CDI, we need to include the MicroStream Jakarta Data integration, the MicroStream SQL (relational) integration, and the PostgreSQL driver.
XML

<dependency>
    <groupId>expert.os.integration</groupId>
    <artifactId>microstream-jakarta-data</artifactId>
    <version>${microstream.data.version}</version>
</dependency>
<dependency>
    <groupId>one.microstream</groupId>
    <artifactId>microstream-afs-sql</artifactId>
    <version>${microstream.version}</version>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.14</version>
</dependency>

The second step is to override the configuration to use a relational database. First, create a DataSource; we’ll then inject it and use it in the StorageManager.

Java

@ApplicationScoped
class DataSourceSupplier implements Supplier<DataSource> {

    private static final String JDBC = "microstream.postgresql.jdbc";
    private static final String USER = "microstream.postgresql.user";
    private static final String PASSWORD = "microstream.postgresql.password";

    @Override
    @Produces
    @ApplicationScoped
    public DataSource get() {
        Config config = ConfigProvider.getConfig();
        PGSimpleDataSource dataSource = new PGSimpleDataSource();
        dataSource.setUrl(config.getValue(JDBC, String.class));
        dataSource.setUser(config.getValue(USER, String.class));
        dataSource.setPassword(config.getValue(PASSWORD, String.class));
        return dataSource;
    }
}

@Alternative
@Priority(Interceptor.Priority.APPLICATION)
@ApplicationScoped
class SQLSupplier implements Supplier<StorageManager> {

    @Inject
    private DataSource dataSource;

    @Override
    @Produces
    @ApplicationScoped
    public StorageManager get() {
        SqlFileSystem fileSystem = SqlFileSystem.New(
                SqlConnector.Caching(
                        SqlProviderPostgres.New(dataSource)
                )
        );
        return EmbeddedStorage.start(fileSystem.ensureDirectoryPath("microstream_storage"));
    }

    public void close(@Disposes StorageManager manager) {
        manager.close();
    }
}

With the configuration ready, the next step is to create the entity and its repository. In our sample, we’ll make Airplane the entity and Airport the repository.
Java

@Repository
public interface Airport extends CrudRepository<Airplane, String> {

    List<Airplane> findByModel(String model);
}

@Entity
public class Airplane {

    @Id
    private String id;

    @Column("title")
    private String model;

    @Column("year")
    private Year year;

    @Column
    private String manufacturer;
}

The last step is executing the application, creating airplanes, and filtering by manufacturer. Thanks to the Jakarta EE and MicroProfile specifications, the integration works with both microservices and monoliths.

Java

public static void main(String[] args) {
    try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
        Airplane airplane = Airplane.id("1").model("777").year(1994).manufacturer("Boeing");
        Airplane airplane2 = Airplane.id("2").model("767").year(1982).manufacturer("Boeing");
        Airplane airplane3 = Airplane.id("3").model("747-8").year(2010).manufacturer("Boeing");
        Airplane airplane4 = Airplane.id("4").model("E-175").year(2023).manufacturer("Embraer");
        Airplane airplane5 = Airplane.id("5").model("A319").year(1995).manufacturer("Airbus");
        Airport airport = container.select(Airport.class).get();
        airport.saveAll(List.of(airplane, airplane2, airplane3, airplane4, airplane5));
        var boeings = airport.findByModel(airplane.getModel());
        var all = airport.findAll().toList();
        System.out.println("The Boeings: " + boeings);
        System.out.println("The Boeing models available: " + boeings.size());
        System.out.println("The airport total: " + all.size());
    }
    System.exit(0);
}

Conclusion

In conclusion, the impedance mismatch between SQL databases and Java applications presents significant challenges in terms of complexity, performance, development effort, maintenance, and the learning curve. However, by understanding these trade-offs and exploring alternative solutions, such as MicroStream, developers can mitigate these challenges and achieve better outcomes.
MicroStream offers a powerful approach to address the impedance mismatch by eliminating the need for a separate SQL database and reducing the complexity of mapping and conversion processes. With MicroStream, developers can leverage the natural benefits of object-oriented programming in Java without sacrificing performance or increasing computational overhead.

By storing Java objects directly in memory, MicroStream enables efficient data storage and retrieval, resulting in improved application performance. It eliminates the need for complex mapping logic and reduces the development effort required to synchronize data between the object-oriented model and the relational database.

Moreover, MicroStream aligns with the principles of FinOps culture by reducing power consumption, which translates into cost savings in cloud environments. By optimizing resource usage and minimizing the need for data mapping and conversion, MicroStream contributes to a more cost-effective and efficient application architecture.

While trade-offs are associated with impedance mismatch, such as increased complexity and maintenance challenges, MicroStream offers a viable solution that balances these trade-offs and enables developers to build ultrafast applications with SQL databases. By leveraging the power of Jakarta Data specifications and MicroStream's in-memory object graph persistence, developers can achieve a harmonious integration between Java and SQL databases, enhancing application performance and reducing development complexities.

In the rapidly evolving application development landscape, understanding the challenges and available solutions for impedance mismatch is crucial. With MicroStream, developers can embrace the advantages of object-oriented programming while seamlessly integrating with SQL databases, paving the way for efficient, scalable, and high-performance applications.

Source: MicroStream Integration on GitHub
Data loss is one of the biggest problems developers face when building distributed systems. Whether due to network issues or code bugs, data loss can have serious consequences for enterprises. In this article, we'll look at how to build Kafka listeners with Spring Boot and how to use Kafka's acknowledgment mechanisms to prevent data loss and ensure the reliability of our systems.

Apache Kafka

Apache Kafka is a distributed message platform used to store and deliver messages. Once a message is written to Kafka, it is kept according to a retention policy. Messages are read out through the consumer groups mechanism. Each consumer group stores an offset per partition, which tracks how far that group has progressed in reading messages. This allows each consumer group to read a topic independently and resume from where it left off in case of failures or restarts. In a simplified way, this can be represented as follows:

After successfully processing a message, a consumer sends an acknowledgment to Kafka, and the offset pointer for that consumer group is shifted. As mentioned earlier, other consumer groups store their own offset values in the message broker, allowing messages to be read independently.

When we talk about high-reliability systems that must guarantee no data loss, we must consider all possible scenarios. Apache Kafka, by design, already has the features to ensure reliability. We, as consumers of messages, must also provide proper reliability. But what can go wrong?

- The consumer receives the message and crashes before it can process it
- The consumer receives the message, processes it, and then crashes
- Any network problems

This can happen for reasons beyond our control — temporary network unavailability, an incident on the instance, pod eviction in a K8s cluster, etc. Kafka allows guaranteeing message delivery using the acknowledgment mechanism — at least once delivery.
It means that the message will be delivered at least once, but under certain circumstances, it can be delivered several times. All we need to do is configure Apache Kafka correctly and be able to react to duplicate messages if needed. Let's try to implement this in practice.

Run Apache Kafka

To start the message broker, we also need ZooKeeper. The easiest way to do this is with docker-compose. Create the file docker-compose.yml:

YAML

---
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.3
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:7.3.3
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1

Create a new topic:

Shell

docker exec broker \
  kafka-topics --bootstrap-server broker:9092 \
  --create \
  --topic demo

To produce messages, you can run the command:

Shell

docker exec -ti broker \
  kafka-console-producer --bootstrap-server broker:9092 \
  --topic demo

Each line is a new message. When finished, press Ctrl+C:

Shell

>first
>second
>third
>^C%

Messages have been written and will be stored in Apache Kafka.
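Because at-least-once delivery means the consumer we build next may receive the same message more than once, a common pattern is an idempotent consumer that remembers which message keys it has already handled and skips redeliveries. Here is a minimal, broker-free sketch of the idea; the IdempotentProcessor class and its in-memory set are illustrative assumptions, not part of Kafka or Spring:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: skip redelivered messages by remembering processed keys.
// A production system would persist the processed IDs (e.g., in the same
// transactional store as the processing results) so they survive restarts.
class IdempotentProcessor {

    private final Set<String> processedKeys = new HashSet<>();

    /**
     * Processes the message unless its key has already been seen.
     * Returns true if the message was processed, false if it was
     * skipped as a duplicate delivery.
     */
    boolean process(String key, String value) {
        if (!processedKeys.add(key)) {
            return false; // duplicate caused by at-least-once redelivery
        }
        System.out.println("Processed value: " + value);
        return true;
    }
}
```

With this in place, a redelivered message is detected by its key and ignored, so at-least-once delivery behaves like exactly-once from the application's point of view.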
Spring Boot Application

Create a Gradle project and add the necessary dependencies to build.gradle:

Groovy

plugins {
    id 'java'
    id 'org.springframework.boot' version '2.7.10'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '17'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    implementation 'org.springframework.kafka:spring-kafka'
    compileOnly 'org.projectlombok:lombok:1.18.26'
    annotationProcessor 'org.projectlombok:lombok:1.18.26'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.springframework.kafka:spring-kafka-test'
    testCompileOnly 'org.projectlombok:lombok:1.18.26'
    testAnnotationProcessor 'org.projectlombok:lombok:1.18.26'
}

application.yml:

YAML

spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: demo-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Let's write an event handler:

Java

@Component
@Slf4j
public class DemoListener {

    @KafkaListener(topics = "demo", groupId = "demo-group")
    void processKafkaEvents(ConsumerRecord<String, String> record) {
        log.info("Try to process message");
        // Some code
        log.info("Processed value: " + record.value());
    }
}

Execution result:

Shell

Try to process message
Processed value: first
Try to process message
Processed value: second
Try to process message
Processed value: third

But what if an error happens during message processing? In that case, we need to handle it correctly. If the error is caused by an invalid message, we can write to the log or place the message in a separate topic, a DLT (dead letter topic), for further analysis. And what if processing implies calling another microservice, but that microservice doesn't answer?
In this case, we may need a retry mechanism. To implement it, we can configure a DefaultErrorHandler:

Java

@Configuration
@Slf4j
public class KafkaConfiguration {

    @Bean
    public DefaultErrorHandler errorHandler() {
        BackOff fixedBackOff = new FixedBackOff(5000, 3);
        DefaultErrorHandler errorHandler = new DefaultErrorHandler((consumerRecord, exception) -> {
            log.error("Couldn't process message: {}; {}", consumerRecord.value().toString(), exception.toString());
        }, fixedBackOff);
        errorHandler.addNotRetryableExceptions(NullPointerException.class);
        return errorHandler;
    }
}

Here we have specified that, in case of an error, we will retry up to three times at intervals of five seconds. But if we get an NPE, we won't retry; we just write a message to the log and skip the message.

If we want more flexibility in error handling, we can manage acknowledgments manually:

YAML

spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: demo-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      properties:
        enable.auto.commit: false
    listener:
      ack-mode: MANUAL

Here we set spring.kafka.consumer.properties.enable.auto.commit=false (if true, the consumer's offset is periodically committed in the background at the interval given by auto.commit.interval.ms, which defaults to 5000 ms) and spring.kafka.listener.ack-mode=MANUAL, which means we want to control this mechanism ourselves. Now we can control the sending of the acknowledgment ourselves:

Java

@KafkaListener(topics = "demo", groupId = "demo-group")
void processKafkaEvents(ConsumerRecord<String, String> record, Acknowledgment acknowledgment) {
    log.info("Try to process message");
    try {
        // Some code
        log.info("Processed value: " + record.value());
        acknowledgment.acknowledge();
    } catch (SocketTimeoutException e) {
        log.error("Error while processing message. Try again later");
        acknowledgment.nack(Duration.ofSeconds(5));
    } catch (Exception e) {
        log.error("Error while processing message: {}", record.value());
        acknowledgment.acknowledge();
    }
}

The Acknowledgment object allows you to explicitly acknowledge or reject (nack) the message. By calling acknowledge(), you are telling Kafka that the message has been successfully processed and can be committed. By calling nack(), you are telling Kafka that the message should be redelivered for processing after a specified delay (for example, when another microservice isn't responding).

Conclusion

Data loss prevention is critical for consumer Kafka applications. In this article, we looked at some best practices for exception handling and data loss prevention with Spring Boot. By following these practices, you can ensure that your application is more resilient to failures and can gracefully recover from errors without data loss. By applying these strategies, you can build a robust and reliable Kafka consumer application.