React is a popular JavaScript framework that was developed by Facebook and has gained a massive following among developers due to its simplicity, modularity, and reusability. In this series of short "bloglets," our team will cover a wide array of React topics, including developer tips, issues, and experiences. We are hopeful that everyone from the beginner to the seasoned professional will find something useful in each post.

Using React Context

By Jeff Schuman

Developers new to React have a tendency to overuse property propagation to nested child components. The term for this is prop drilling. Prop drilling is generally frowned upon, as it inhibits clean, reusable, and DRY code. One alternative to prop drilling is the React Context API. Context allows for the sharing of data in a component hierarchy without passing the data down to each component through properties. This article will give you an overview of the Context API through an example.

A typical use case for Context is application theming. For our example, we'll give our user the ability to easily enlarge the text for each UI widget. In the final product, the user can increase the size of the elements inside a box by selecting a dropdown (initially set to 'Normal'); the other options are 'Enlarged' and 'Gigantic.'

The first step in using React Context is to create the Context itself. It is generally a good idea to use a separate module to create the context so it can be reused. In our simple ThemeContext.js file, we use the React API createContext() function and initialize it with a null value. Later, we will ensure that the context has valid data.

In our AppContainer component, we'll create a state to capture the current theme. This state can be changed by manipulating the dropdown: the theme state is defined and updated, and the dropdown updates the theme state value.

Next, we need to provide our context to a set of components. We do this by importing the context and then using its Provider to encapsulate the components that have access to the Context data. We import the ThemeContext from our context module and then use the ThemeContext.Provider component to wrap the components that have access to the context. Note that we set the value of the context to our theme state data by using the value prop of the Provider component.

Let's take a look at how we can now read the context data in a component within the Provider's hierarchy. Our UserList component is fairly simple: it does not use Context in any way. It simply instantiates a list of User names and then creates a User component for each user.

The User component shows how to gain access to the Context data. We import the useContext hook and the ThemeContext module. We invoke the useContext hook, passing in the Context that we created in the module, and the return value is the context data. In our example, we simply assign the context value as the className of our <span> content. Our ButtonPair and Button components work similarly, as does the StaticText component.

Add in some relatively simple CSS for each class, and putting it all together, you have a functioning application where modifying the theme affects all the visual elements within the Context Provider. A consolidated sketch of the components described above follows.
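The article's original listings are not reproduced here, so below is a minimal consolidated sketch of the pieces just described. The file and component names (ThemeContext.js, AppContainer, User) follow the prose; the CSS class names and the exact dropdown wiring are assumptions.

```jsx
// ThemeContext.js: create the Context in its own module so it can be reused.
import { createContext } from 'react';
export const ThemeContext = createContext(null);

// AppContainer.jsx: owns the theme state and provides it via the Provider.
import React, { useState } from 'react';
import { ThemeContext } from './ThemeContext';
import UserList from './UserList';

export const AppContainer = () => {
  const [theme, setTheme] = useState('normal');

  return (
    <ThemeContext.Provider value={theme}>
      {/* The dropdown updates the theme state value. */}
      <select value={theme} onChange={(e) => setTheme(e.target.value)}>
        <option value="normal">Normal</option>
        <option value="enlarged">Enlarged</option>
        <option value="gigantic">Gigantic</option>
      </select>
      <UserList />
    </ThemeContext.Provider>
  );
};

// User.jsx: reads the context with the useContext hook.
import React, { useContext } from 'react';
import { ThemeContext } from './ThemeContext';

export const User = ({ name }) => {
  const theme = useContext(ThemeContext);
  // The context value becomes the className, so CSS controls the text size.
  return <span className={theme}>{name}</span>;
};
```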
In Summary

The React Context API is one solution to prop drilling in React components. Start by creating a context object. Import the context object in your component and use its Provider component to establish the container of components (and their descendant components) that will have access to the context data. You'll also want to set (and provide for modification of) your context data here. Wherever the context is needed, import the context object and use the useContext hook to gain access to the context data.

Using the Context API helps keep your components clean and DRY, but there can be drawbacks:

Component reusability can suffer when using the Context API. We've effectively created a dependency on the context data in any component within the hierarchy that uses that context. Attempting to use the component outside the Provider hierarchy is not advisable.

There can be performance concerns with using the Context API in a complicated and deep hierarchy. As the context changes, the change is broadcast throughout the hierarchy regardless of whether a given component depends on it or not.

An alternative to using Context is Component Composition — a topic for another time! I hope you have enjoyed this overview of the React Context API and an example of its use. It is a powerful feature that can help you write clean, DRY code and avoid the messiness of passing properties throughout your component hierarchy.

_____________________________________________________________________________________

Controlled Components: The Key To Consistent React Forms

By Jared Mohney

Controlled components are a key concept in React. With their state fully controlled by their parent component, their data remains in sync with the rest of our application. Without controlled components, we could find ourselves with multiple sources of truth, and that's no good! Let's quickly look at three simple examples we can run into when building forms (a sketch of the input case follows the three examples).

Input Example

In this example, we have an input element that is fully controlled. Our parent component sets the initial state via useState and handles any updates to it thereafter via handleChange. When submitted, the parent component has access to the value of the input element and can do whatever it needs to with our pizza!

Checkbox

Now this feels familiar. Here we have a clean-cut example of a basic checkbox, reading and updating the state of its parent instead of managing its own internally. Want to learn more about the useCallback being used by handleChange? Check out our article on rendering React reliably!

Select

Our final example is a multi-select dropdown, and our approach is identical. We want to hijack the internal state management of these elements so that we can establish reliable data flow within our application. I hope it's clear now: State control isn't scary (or difficult)!

TIP: If you find yourself tackling a larger form, reaching for libraries like Formik and React Hook Form can handle a lot of this boilerplate for you (and more).
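Since the example listings themselves are not reproduced here, this is a minimal sketch of the controlled input case; the component and state names (PizzaForm, topping) are assumptions. The checkbox and select variants follow the same pattern, binding checked or the selected options instead of value.

```jsx
import React, { useState } from 'react';

export const PizzaForm = () => {
  // The parent owns the input's state: a single source of truth.
  const [topping, setTopping] = useState('');

  const handleChange = (event) => setTopping(event.target.value);

  const handleSubmit = (event) => {
    event.preventDefault();
    console.log(`Ordering a pizza with ${topping}`);
  };

  return (
    <form onSubmit={handleSubmit}>
      {/* value + onChange together make this input fully controlled. */}
      <input type="text" value={topping} onChange={handleChange} />
      <button type="submit">Order pizza</button>
    </form>
  );
};
```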
In summary, by controlling the state of our form components with React, we can ensure that our data is consistent. This is important in larger applications where there may be multiple components that access the same data. With controlled components, we can avoid inconsistencies and maintain a single source of truth!

_____________________________________________________________________________________

React: Converting Class-Based Components Into Functional Ones. It's Not So Bad!

By Adam Boudion

Introduction

If you've worked with React for any length of time, you'll likely know that there are two different types of React components: class and functional. Class components are the older way and involve extending the internal React class called Component. Functional components, the newer way, are simple JavaScript functions that return JSX.

Let's say you find yourself working in a codebase of class-based components, and to keep things closer to the leading edge, you decide you want to update some of these class-based components to functional ones. It can be overwhelming if you're not familiar with functional components and hooks, but it's really not as daunting as it first seems. Consider the following class component:
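The original listing is not reproduced here, so below is a hedged reconstruction from the prose (the component name Welcome is an assumption), formatted so that the line references in the following paragraphs (lines 4-5, 7, 15, and 18-20) line up.

```jsx
import React, { Component } from 'react';

class Welcome extends Component {
  constructor(props) {
    super(props);
    this.state = {
      count: 0,
    };
  }

  componentDidMount() { console.log('Component mounted'); }

  componentWillUnmount() { console.log('Component will unmount'); }

  render() {
    return (
      <div>
        <h1>Welcome, {this.props.name}!</h1>
        <p>You clicked {this.state.count} times.</p>
        <button onClick={() => this.setState({ count: this.state.count + 1 })}>
          Click me
        </button>
      </div>
    );
  }
}

export default Welcome;
```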
Nothing fancy, for the sake of simplicity, but enough to let us understand what needs doing to make this into a functional component. This component takes in a single prop, name, which will be used to personalize a welcome message. It also contains a button that uses state to keep track of how many times you click it and displays that count in the browser. Finally, it prints some messages to the console when the component mounts and unmounts so that we can follow its lifecycle.

Converting Props

First, we'll address the prop. In the class component, the props object is passed into the constructor on lines 4 and 5. The prop is then used within the component by getting its value off of this.props, as seen on line 18 in the example above. Functional components are a bit different. Instead of props being passed in the constructor, they're simply passed into the component function itself, as with any other ES6 arrow function. At that point, you can reference the prop directly inside the component. It's worth noting that the functional version destructures the props ({ name }) directly in the parameter list; accepting a props parameter and reading props.name would be functionally equivalent.

Converting State

Next, we need to talk about the differences in state. In the class component example, we can see the state being initialized in the constructor on line 7: a single state variable called count is declared and initialized with a value of zero. That state variable is then printed to the screen as part of line 19. Then, on line 20, the button click handler increments the state variable by one and triggers a re-render via the setState method. In functional components, state is handled using the relatively new useState hook, though the core reactive behavior remains the same. To initialize our count variable in the new functional component, we need a line like const [count, setCount] = useState(0);. The useState hook takes in a single argument, an initial value (zero, in this case), and returns two things. The first is a new state variable, called count in this case. The second is a method called setCount, which is used to change the value of this variable. So, instead of using the setState method as in the class-based example, we call setCount with the new value we'd like to set. Lines 19 and 20 from above become the count display and a click handler that calls setCount(count + 1), as the final version below shows.

Rendering

Next, we need to take a look at how HTML is rendered in functional components. In our class example above, we have the familiar render method on line 15, which is responsible for rendering the HTML of our component to the virtual DOM. In a functional component, there isn't a render method. Instead, the return statement of the component itself contains the content we'd like to render, as the final version below shows.

Lifecycle Considerations

Finally, we need to look at how to translate the familiar React class lifecycle methods into the world of functional components and hooks. This one is a little tricky, but once you understand the functional equivalent, it starts to make sense. Typically, in class-based components like the one above, there is a componentDidMount method, used to run code that we'd like to execute when the component is rendered for the very first time. Conversely, there is a componentWillUnmount method, which runs when the component is about to be removed from the virtual DOM. Functional components operate quite differently in this regard because they use the useEffect hook, as the final version below shows.

This looks markedly different and maybe a little daunting. But it really isn't! Let's break it down. The useEffect hook runs every time a component re-renders. But wait a second! We just want this to run once, when the component is initialized, not after every render. That's where the tricky little empty array on line 13 comes into play. This argument is optional; when it's omitted, the useEffect hook runs unconditionally after every render. If the array is passed but is empty, as in our example, the effect only runs once, when the component is first mounted. That's the behavior we want for our example, but it's worth talking about what happens when you actually pass something into that array. If we passed in our count variable, for example, then this effect would be skipped on every render except the ones where count has changed. This is powerful, as it allows the developer to optimize and cut down on excessive re-renders.

Then there's the return statement, which now holds the code that was in componentWillUnmount. This is a cleanup function, and it is also optional. It runs whenever the component is unmounted, which is what we want in this case. It's important to note that it also runs right before the same useEffect runs again, to clean up after the previous run. But since our array of dependencies is empty, the effect only runs once when first rendered, and therefore the cleanup function only runs once, when the component is unmounted.

Conclusion

So, with all of that said, let's take a look at the final product. This code is functionally equivalent (no pun intended) to the earlier class example and uses hooks to replace the old lifecycle methods.
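As with the class listing, the final code is not reproduced here; this is a hedged reconstruction, formatted so that the empty dependency array mentioned above lands on line 13.

```jsx
import React, { useState, useEffect } from 'react';

const Welcome = ({ name }) => {
  const [count, setCount] = useState(0);

  // Runs once after the first render; the returned cleanup runs on unmount.
  useEffect(() => {
    console.log('Component mounted');

    return () => {
      console.log('Component will unmount');
    };
  }, []);

  return (
    <div>
      <h1>Welcome, {name}!</h1>
      <p>You clicked {count} times.</p>
      <button onClick={() => setCount(count + 1)}>Click me</button>
    </div>
  );
};

export default Welcome;
```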
We all know that real-world class-based components are usually not this simple, but they can be updated to functional components using the same techniques listed here. While both methods are viable options for creating components, functional components are widely considered to be the path forward by the overall React community, including the creators of React themselves. For this reason, they have committed to maintaining backward compatibility for class components, to avoid forcing rewrites of established code, but are focusing their attention on improving functional components going forward. Because of this, you may want to consider migrating at some point so you can take advantage of the new features they're adding now and in the future.

_____________________________________________________________________________________

Mermaid is a trendy diagramming tool. A year ago, it was integrated into the Markdown rendering of GitHub. It is also integrated into several editors (see "Include diagrams in your Markdown files with Mermaid" for more information). What can you do, however, if you use a different editor? What if you want to use your Markdown document in an environment that does not integrate Mermaid yet? What can you do if the diagram is not Mermaid but PlantUML, Graphviz, or any other diagramming tool? This article will show how you can integrate any diagram-as-code tool into your documents. The technique works for Markdown, Asciidoc, APT, or any other text-based markup language. (The demonstration image that originally opened the article was created the way described here.)

Problem Statement

When documenting systems, it is often necessary to include diagrams. Keeping diagrams in separate files has advantages but also disadvantages. It is easier to keep documentation consistent when the related parts are close together: the more distant two corresponding parts are, the more likely one becomes stale when the other is updated. It is also good if you can parameterize the diagram, avoiding copy-pasting diagram parameters between the document, the documented code, and the diagram.

To solve these problems, more and more markup languages support selected diagramming tool markups embedded in the text. You can include Mermaid in Markdown documents if you target GitHub hosting for your document. You can include PlantUML diagrams in Asciidoc documents. What happens, however, if you want to include Mermaid in Asciidoc? What if you need PlantUML in Markdown? How do you solve the issue if you want to host your Markdown elsewhere besides GitHub?

You can abandon your ideas, stick to the available tools, or wait for a solution. The latter approach, however, will always remain an issue: there will always be a new tool you want to use, and you will have to wait for its support in your favorite markup language. The principal reason for this is an architectural mismatch. Document markup languages should be responsible only for document structure and content and nothing else. Embedding a diagramming tool is a separate concern, part of the document programmability that automates document consistency, and should not be implemented in these languages. The solution is to use a meta markup above the document markup. This meta markup can be document-markup agnostic and support all the diagramming tools you want to use.

Ideas and Approach To Solve the Problem

The basic idea is not new: separation of concerns. The document markup language should be responsible for the document structure and content. The diagramming tool should be responsible for the diagramming. The meta markup should be responsible for the integration. Since the meta markup is language agnostic, it can be used with any existing and future document markup languages. There is no need to wait for the support of the diagramming tool in the document markup language.

The only question is the integration of the meta markup into the document markup language. The simplest and loosest way to integrate it is a preprocessor. Processing the meta markup, we read and generate a text file, and the document markup processing tool picks up where the meta markup has left off.
The document markup tool has no idea that a program generated its input rather than a person editing it manually. Strictly speaking, when you edit a document markup, the editor is the program that generates the file; technically, there is no difference.

There are other possibilities. Most document markups support different editors that deliver some form of WYSIWYG editing. The meta markup preprocessor can be integrated into these editors. That way, the document markup enriched with the meta markup can seamlessly be edited in the WYSIWYG editor. The proposed meta markup and the implementing tool, Jamal, follow both approaches.

Suggested Solutions/Tools

The suggested solution is to use Jamal as the meta markup. Jamal is a general-purpose macro processor. There are other meta-markup processing tools, like PyMdown, but these tools usually target a specific document markup and a specific purpose. Jamal is a general-purpose, Turing-complete macro processor with more than 200 macros for different purposes. These macros make your documents programmable to automate manual document maintenance tasks. The general saying is, "If you could give a task to an assistant to do, then you can automate it with Jamal."

Jamal has a PlantUML module. PlantUML is written in Java, the language I used to create Jamal, which makes integrating PlantUML into Jamal easy; PlantUML diagrams embedded in the documentation can be converted during processing. Jamal, however, is not limited to using only tools written in Java. To demonstrate this, we will use the Mermaid diagramming tool, written in JavaScript and running on Node. Since Mermaid is not a Java program, it cannot be executed inside the JVM; instead, our documentation will execute Mermaid as a separate process. Other diagramming tools can be integrated similarly if they can be executed on the document processing machine.

Install Mermaid

The first step is to install Mermaid. The steps are documented on the Mermaid site; I will not repeat them here because I do not want to create a document that gets outdated. On my machine, the installation creates the /usr/local/bin/mmdc executable. This file is a JavaScript script that starts the Mermaid diagramming tool. While executing it from Jamal, I realized the process has a different environment than my login shell. To deal with that, I had to edit the file: instead of using the env command to find the node interpreter, I specified the full path to the node executable. Other installations may differ, and it does not affect the rest of the article; it is a technical detail.

Install Jamal

We will use Jamal as the meta markup processor. The installation is detailed in the documentation of Jamal. You can start it from the command line, as a Maven plugin, using JBang, and in many other ways. I recommend using it as a preprocessor integrated into the IntelliJ Asciidoctor plugin. It will provide you with WYSIWYG editing of your document in Markdown and Asciidoc enriched with Jamal macros. Also, the installation is nothing more than executing the command line:

```shell
mvn com.javax0.jamal:jamal-maven-plugin:2.1.0:jamalize
```

This will download version 2.1.0, which we use in this article (a pre-release at the time of writing), and copy all the needed files into your project's .asciidoctor/lib directory. It will make the macros available for the Asciidoctor plugin. What needs manual work is configuring IntelliJ to treat all .jam files as Asciidoc files. That way, the editor will invoke the Asciidoctor plugin and use the Jamal preprocessor.
It is the setup that I also use to write the articles.

Create the Macros for Mermaid

To have a Mermaid diagram inside the document, we should do three things using macros:

1. Save the Mermaid text into a file.
2. Execute the Mermaid tool to convert the text into an SVG file.
3. Reference the SVG file as an image in the document.

Later, we will see how to save on Mermaid processing, executing it only when the Mermaid text changes. We will use the io:write macro to save the Mermaid text into a file. This macro is in a package that is not loaded by default. We have to load it explicitly. To do that, we use the maven:load macro.

```
{@maven:load com.javax0.jamal:jamal-io:2.1.0}
```

Note: This macro package has to be configured as safe for the document in the .jamal/settings.properties file, as documented. The macros in this package can read and write files and execute configured commands. Using a macro package like that from an untrusted source is a security risk. For this reason, every package loaded by the maven:load macro has to be configured as safe. The configuration specifies the package and the documents where it can be used. At the same time, the io package also needs configuration to be able to execute the mmdc command. To do that, the configuration file contains a line assigning a symbolic name to the command, as follows:

```
mermaid=/usr/local/bin/mmdc
```

The io:exec macro will use this symbolic name to find the command to execute. When the macro package is loaded, we can use the io:write macro as in the following sample:

```
{#define CHART=flowchart TD
    A[Christmas] -->|Get money| B(Go shopping)
    B --> C{Let me think}
    C -->|One| D[Laptop]
    C -->|Two| E[iPhone]
    C -->|Three| F[fa:fa-car Care]
}

{#io:write (output="aaa.mmd")
{CHART}}
```

When the file is created, we can execute the Mermaid tool to convert it into an SVG file, as follows:

```
{#io:exec command="""mermaid -i aaa.mmd -o aaa.svg
""" cwd="." error="convert.err.txt" output="convert.out.txt"}
```

By that, we have the file. Whenever the Mermaid text changes, the SVG file will be recreated. As a matter of fact, whenever the document changes, the SVG file will be recreated, which wastes resources when the diagram remains the same and the processing runs interactively. To help with that, we can use the hashCode macro, which calculates the hash code of a text. We will use the hash code to name the file: whenever the diagram changes, the file's name changes, and if the file exists, it must already contain the diagram for the current text.

To check that the file exists, we include it in the document. Because we do not want the SVG text in the document, we surround the include with the block macro. If the file does not exist, an error occurs; the macro try handles this error, and the execution continues, but the macro CREATE is set to true in this case. If there is no error, because the file already exists, the macro CREATE is set to false. The if macro checks the value of CREATE: if it is true, it executes the io:write and io:exec macros to create the file; if it is false, it does nothing.

```
{#define KEY={#hashCode {CHART}}}{@define CREATE=true}
{@try {#block{#include images/{KEY}.svg}{@define CREATE=false}}}
{#if `//`{CREATE}//
{#io:write (mkdir output="images/aaa.mmd")
{CHART}}
{#io:exec command="""mermaid -i images/aaa.mmd -o images/{KEY}.svg
""" cwd="." error="convert.err.txt" output="convert.out.txt"}//}
```

Summary and Takeaway

This article discussed integrating Mermaid diagrams into your Asciidoc, Markdown, or any other markup document. We selected Mermaid for two reasons. First, it is usually the tool people ask for. Second, it is an excellent example of a non-Java tool that can be integrated into document processing. The described approach can be applied to any external tool capable of running as a process. The samples also demonstrate a complex structure of macros, showing the power of the Jamal macro processor; such complexity is rarely needed. In addition to the technology, I also discussed, though only briefly, the separation of concerns for document handling and how the document formatting markup should be separated from the processing meta markup. If you want diagrams in your documentation, download Jamal and enhance your documents.
The Java collection framework provides a variety of classes and interfaces, such as lists, sets, queues, and maps, for managing and storing collections of related objects. In this blog, we go over best practices and tips for using the Java collection framework effectively.

What Is a Collection Framework?

The Java collection framework is a key element of Java programming. To use it effectively, consider factors like utilizing the enhanced for loop, generics, avoiding raw types, and selecting the right collection.

Choosing the Right Collection for the Task

Each collection class has its own distinct set of qualities and is made for a particular purpose. Here is a description of each kind of collection:

List: The ArrayList class is the most widely used list implementation in Java, providing resizable arrays when it is unknown how large the collection will be.
Set: The HashSet class is the most popular implementation of a set in Java, providing uniqueness with a hash-table-based implementation.
Queue: The LinkedList class is the most popular Java implementation of a queue, allowing elements to be accessed in a specific order.
Map: The HashMap class is the most popular map implementation in Java for storing and retrieving data based on distinct keys.

Factors to Consider While Choosing a Collection

Type of data: Different collections may be more suitable depending on the kind of data that will be handled and stored.
Ordering: A list or queue is preferable to a set or map when the order of items matters.
Duplicate elements: A set or map may be a better option than a list or queue if duplicate elements are not allowed.
Performance: Different collections have different performance characteristics. By picking the right collection, you can improve the performance of your code.

Examples of Use Cases for Different Collections

List: Lists allow for the storage and modification of ordered data, such as a to-do list or shopping list.
Set: A set can be used to hold unique items, such as email addresses.
Queue: A queue can be used to access elements in a specific order, such as handling jobs in the order they are received.
Map: A map can be used to store and access data based on unique keys, such as user preferences.

Selecting the right collection for a Java application is essential, taking into account data type, ordering, duplicate elements, and performance requirements. This will increase code effectiveness and efficiency.

Using the Correct Methods and Interfaces

In this section, we cover the various methods and interfaces that the collection framework provides, along with some tips on how to use them effectively:

Choosing the right collection: The collection framework provides a variety of collection types to improve code speed and readability, such as lists, sets, queues, maps, and deques.
Using iterators: Iterators are crucial for traversing collections, but if the collection is modified during iteration, they can fail fast and throw a ConcurrentModificationException. Use a CopyOnWriteArrayList or ConcurrentHashMap to prevent this.
Using lambda expressions: Lambda expressions, introduced in Java 8, let programmers write code that can be passed as an argument to a method and can be combined with the filter() and map() methods of the Stream API to process collections.
Using the Stream API: The Stream API is a powerful feature in Java 8 that enables functional collection processing, parallelizable and lazy, resulting in better performance.
Using generics: Generics are a powerful feature introduced in Java 5 that allows you to write type-safe code. They are especially useful when working with collections, as they allow you to specify the types of elements that a collection can contain; wildcard types (such as List<? extends Number>) add flexibility where the exact element type should stay open.

The Java collection framework provides methods and interfaces to improve code efficiency, readability, and maintainability. Iterators, lambda expressions, the Stream API, and generics can be used to improve performance and avoid common pitfalls.

Best Practices for Collection Usage

In this section, we will explore some important best practices for collection usage.

Proper Initialization and Declaration of Collections

Collections should be initialized correctly before use to avoid null pointer exceptions. Use the appropriate interface or class to declare the collection for uniqueness or order.

Using Generics to Ensure Type Safety

Generics provide type safety by allowing us to specify the type of objects that can be stored in a collection, catching type mismatch errors at compile time. When declaring a collection, specify the type using angle brackets (<>). For example, List<String> ensures that only String objects can be added to the list.

Employing the Appropriate Interfaces for Flexibility

The Java collection framework provides a variety of interfaces, allowing us to easily switch implementations and take advantage of polymorphism to write code that is more modular and reusable.

Understanding the Behavior of Different Collection Methods

It is important to understand the behavior of collection methods to use them effectively. To gain a thorough understanding, consult the Java documentation or reliable sources. Understanding the complexity of operations like contains() and remove() can make a difference in code performance.

Handling Null Values and Empty Collections

To prevent unexpected errors or undesirable behavior, it's crucial to handle null values and empty collections properly. Check that collections are not null and contain the required data before using them.

Memory and Performance Optimization

In this section, we explore techniques and best practices to optimize memory utilization and enhance the performance of collections in Java:

1. Minimizing the Memory Footprint With the Right Collection Implementation

Memory usage can be significantly decreased by selecting the best collection implementation for the job. When frequent random access is required, for instance, using an array list rather than a linked list can reduce memory overhead.

2. Efficient Iteration Over Collections

Iterating over collections is common practice, so picking the most effective iteration strategy is crucial. Compared to conventional loops, iterator-based loops or enhanced for-each loops can offer better performance.

3. Considering Alternative Collection Libraries for Specific Use Cases

The Java collection framework offers a wide range of collection types, but in some cases, alternative libraries like Guava or Apache commons-collections can provide additional features and better performance for specific use cases.

4. Utilizing Parallel Processing With Collections for Improved Performance

With the advent of multi-core processors, leveraging parallel processing techniques can enhance the performance of operations on large collections. The Java Stream API supports parallel execution, allowing for efficient processing of data in parallel, as the short sketch below shows.
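A minimal sketch tying several of these points together: generics for type safety, ArrayList as the list implementation, and the Stream API with its parallel variant. The class name and data are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class CollectionTips {
    public static void main(String[] args) {
        // Generics: the compiler rejects anything that is not a String.
        List<String> names = new ArrayList<>();
        names.add("Alice");
        names.add("Bob");
        names.add("Charlie");

        // Stream API: declarative filtering and mapping instead of manual loops.
        List<String> shortNames = names.stream()
                .filter(name -> name.length() <= 3)
                .map(String::toUpperCase)
                .toList(); // Java 16+; use collect(Collectors.toList()) on older versions
        System.out.println(shortNames); // prints [BOB]

        // The parallel variant distributes work across cores for large collections.
        long totalLength = names.parallelStream().mapToInt(String::length).sum();
        System.out.println(totalLength); // prints 15
    }
}
```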
Tips and Tricks for Effective Collection Usage

Using the Right Data Structures for Specific Tasks

The right data structure must be chosen for the task at hand; each has advantages and disadvantages, so knowing them lets you make wise decisions and improve performance.

Making Use of Utility Methods in the Collections Class

The Collections class in Java provides utility methods to simplify and streamline collection operations, such as sorting, searching, shuffling, and reversing.

Leveraging Third-Party Libraries and Frameworks for Enhanced Functionality

The Java collection framework provides a wide range of data structures, but third-party libraries and frameworks can provide more advanced features and unique data structures. These libraries can boost productivity, give access to more powerful collection options, and address use cases that the built-in Java collections cannot.

Optimizing Collections for Specific Use Cases

Immutable collections offer better thread safety and can be shared without defensive copying. Sizing collections appropriately up front prevents frequent resizing and enhances performance. Specialized collections like HashSet or TreeMap can improve efficiency for unique or sorted elements. Optimizing collections improves performance, readability, and maintainability.

Conclusion

In this blog post, we have covered best practices and tips for using the Java collection framework effectively. To sum up, the Java collection framework is a crucial component of Java programming. By adhering to these best practices, you can use the collection framework effectively and create more efficient, maintainable code.
Show of hands: how many of us truly understand how our build automation tool builds its dependency tree? Now, lower your hand if you understand because you work on build automation tools. Thought so!

One frustrating responsibility of software engineers is understanding your project's dependencies: what transitive dependencies were brought in, and by whom; why v1.3.1 is used when v1.2.10 was declared; what happened when the transitive dependencies changed; how multiple versions of the same artifact occurred. Every software engineer has piped a dependency tree into a text file, searched for specific artifacts, and then worked their way back up to identify its origin. For anything other than trivial projects, creating a mental map of the dependencies is extremely difficult, if not impossible.

I faced this problem when starting a new job with a mature code base, presenting a challenge to assemble the puzzle pieces. I've previously worked with graph databases and thought a graphical view of the dependency artifacts could be created using Neo4J, which resulted in DependencyLoader.

Note: this is not a tutorial on graph databases, nor does this tutorial require a background in graph databases. If interested, Neo4J has tutorials and white papers to help you get started.

Set Up Environment

Install Java

Java 11 or later is required. If not already available, install your favorite OpenJDK flavor.

Install Neo4J

The tutorial requires a Neo4J database into which the dependency information is loaded, preferably unshared, as the loader purges the database before each run. You have been warned!

Neo4J provides personal sandboxes, ideal for short-term projects like this tutorial. Alternatively, install Neo4J locally on your desktop or laptop. Homebrew simplifies macOS installations:

```shell
brew install neo4j && brew services start neo4j
```

Before continuing, confirm access to your Neo4J database using the browser, using either the link and credentials for the Neo4J sandbox or locally at http://localhost:7474. The default credentials for a local install are neo4j/neo4j; upon successful login, you are forced to change the password.

Clone Repositories

The neo4j-gradle-dependencies repository contains the code for loading the dependencies into Neo4J. This tutorial will generate a dependency graph for spring-boot. You must clone these two repositories.

```shell
Scott.Sosna@mymachine src% git clone git@github.com:scsosna99/neo4j-gradle-dependencies.git
Scott.Sosna@mymachine src% git clone git@github.com:spring-projects/spring-boot.git
```

Note: a local Gradle install is not required, as both repositories use the Gradle Wrapper, which downloads all necessary components the first time the wrapper is used.

Generate Dependencies

DependencyLoader takes the dependency tree generated by Gradle as input. Though multiple configurations may be loaded together — i.e., compileClasspath, runtimeClasspath, testCompileClasspath, testRuntimeClasspath — starting with a single configuration is simpler to navigate, especially for a tutorial.

To generate dependencies for all configurations:

```shell
gradle dependencies
# or, with the wrapper:
./gradlew dependencies
```

To generate dependencies for a single configuration:

```shell
gradle dependencies --configuration <configuration>
# or, with the wrapper:
./gradlew dependencies --configuration <configuration>
```

Generate Spring Boot Dependencies

This tutorial creates a dependency graph in Neo4J using the compileClasspath dependencies of Spring Boot.
From the directory where the repositories were cloned, execute the following commands:

```shell
Scott.Sosna@mymachine src% cd spring-boot/spring-boot-project/spring-boot
Scott.Sosna@mymachine spring-boot% ./gradlew dependencies --configuration compileClasspath > dependencies.out
```

The file dependencies.out contains the compile-time classpath dependencies for Spring Boot.

Load Dependencies

First, confirm the connection URL and authentication credentials in DependencyLoader.java and modify them if necessary. Execute the following commands to load the Spring Boot dependencies into Neo4J:

```shell
Scott.Sosna@mymachine spring-boot% cd ../../../neo4j-gradle-dependencies
Scott.Sosna@mymachine neo4j-gradle-dependencies% ./gradlew clean run --args="../spring-boot/spring-boot-project/spring-boot/dependencies.out"
```

When successful, the output from Gradle looks like this:

```shell
Scott.Sosna@PVHY32M6KG neo4j-gradle-dependencies % ./gradlew clean run --args="../spring-boot/spring-boot-project/spring-boot/dependencies.out"

> Task :compileJava
Note: /Users/Scott.Sosna/data/src/github/neo4j-gradle-dependencies/src/main/java/dev/scottsosna/neo4j/gradle/relationship/DependsOn.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

> Task :run
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Jun 02, 2023 6:19:22 AM org.neo4j.driver.internal.logging.JULogger info
INFO: Direct driver instance 1606286799 created for server address localhost:7687
dependencies.out completed.
Jun 02, 2023 6:19:23 AM org.neo4j.driver.internal.logging.JULogger info
INFO: Closing driver instance 1606286799
Jun 02, 2023 6:19:23 AM org.neo4j.driver.internal.logging.JULogger info
INFO: Closing connection pool towards localhost:7687

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
See https://docs.gradle.org/7.5.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 3s
```

View Dependencies

Multiple tools are available for displaying Neo4J graphs, but the built-in browser tool is adequate for this tutorial.

Show the Complete Tree

The query MATCH (a) RETURN a is the relational equivalent of SELECT * FROM <table>.

View Details of an Artifact

Each artifact found creates a node whose properties identify the artifact (groupId/artifactId) and its type, shown in the right-side pane.

View Details of a Dependency

Each dependency is created as a relationship whose properties identify the specifics of the dependency: configuration, specified version, and resolved version. The dependency selected below shows that spring-security:spring-web depends on io.micrometer:micrometer-observation, but the specified version 1.10.7 was resolved as version 1.11.0.

Traverse Dependencies

Neo4J allows you to expand the graph node by node, providing a way to explore specific areas of the dependency tree. Assume that you want to understand the dependencies of the artifact io.projectreactor.netty:reactor-netty-http. First, we'll query Neo4J for that specific node.
```cypher
MATCH (a:Artifact {groupId: 'io.projectreactor.netty', artifactId: 'reactor-netty-http'}) RETURN a
```

Double-clicking on the node shows its neighboring nodes — the artifact(s) that depend on it and the artifact(s) it depends on. This expanded graph shows one artifact that depends on it (the root of the project, with artifact type PROJECT) and six other artifacts on which it depends. Next, double-click on io.netty:netty-codec-http (https://github.com/netty/netty/tree/4.1/codec-http) to show the next level of dependencies. Note that besides the relationships (dependencies) of the selected node, additional relationships for nodes already on the graph may be shown.

Identify Version Mismatch

Gradle's dependency output indicates where the specified version was not the version resolved by Gradle. The properties on the dependency (relationship) can be used in a Neo4J query, restricting the relationships shown and the attached artifacts (nodes).

```cypher
MATCH (a:Artifact)-[d:DEPENDS_ON]->(b:Artifact)
WHERE d.specifiedVersion <> d.resolvedVersion
RETURN a, b, d
```

Neo4J can return results in a tabular format for easier review, if necessary.

```cypher
MATCH (a:Artifact)-[d:DEPENDS_ON]->(b:Artifact)
WHERE d.specifiedVersion <> d.resolvedVersion
RETURN a.name AS source, b.name AS dependency, d.specifiedVersion AS specified, d.resolvedVersion AS resolved
```

Additional Information

mappings.out

The mappings.out file allows you to customize the artifact type assigned to a node based on the artifact's groupId, most commonly to specifically identify artifacts created by your organization.

Input Directory

The command-line argument for DependencyLoader may be a directory containing multiple Gradle dependency trees, all loaded into the same Neo4J database. This helps in understanding the dependencies of related projects with separate build.gradle files.

Constrained and Omitted

Gradle identifies certain dependencies as constrained and omitted. Currently, those are not loaded but would be easy to include, likely by creating additional properties for the relationships.
This is the third and final post about my OCP-17 preparation. In part one, I explained how playing a human virtual machine and refreshing your mastery of arcane constructs is not pointless, even if the OCP doesn't — and doesn't claim to — make you a competent developer. In the second part, I showed you how intrinsic motivation keeps itself going without carrots or sticks, provided you can find ways to make your practice fun and memorable. It's time to share some of these examples and tips.

Make It Quality Time

But first, some advice about logistics and time management. As with physical exercise, short and frequent trumps long and sporadic. It's more effective and more likely to become a habit, like brushing your teeth. Choose the time of day when you are most energetic and productive. The early morning works best for me because I'm a morning person. And there is satisfaction in getting the daily dose crossed off your to-do list, even when it doesn't feel like a chore.

Strike a good balance between reading, practicing, and revising. Once you've worked through the entire textbook, you will need to refresh much of the first few chapters. That's okay. Keep revising them, doing a few questions from each chapter each day. You'll get there slowly but surely.

Make It Practical and Productive

Practice in the previous paragraph means writing novel code aimed at teaching yourself a certain language construct. It's about producing it yourself, so copying snippets from the book doesn't count. If you've ever learned a foreign language the old-fashioned way, you will agree that cramming vocabulary and grammar rules does little for your oral skills. Only speaking can make you fluent, preferably among native speakers. It's like swimming or playing the saxophone: you can't learn it from a book. Never used the NIO2 API or primitive streams? Never done a comparison or binary search of arrays? Get your feet wet, preferably with autocomplete turned off. Better yet, scribble in a plaintext editor and paste the result into your IDE when you're done.

Understand the Why

While Java shows its age, its evolution is managed carefully so new additions don't feel as if they were haphazardly tacked on. Decisions take a long time to mature and are made for a reason. When the book doesn't explain the reasoning behind a certain API peculiarity, try to explain it to yourself instead of parroting a rule.

Here's a case in point from the concurrency API; a short sketch of it follows this section. The submit() method of an executor has two overloaded versions, taking a Runnable or Callable argument, and returns a Future. The void execute() method only takes a Runnable, not a Callable. Why does that make good sense? Well, a Callable yields a value and can throw an Exception. Since execute() acts in a fire-and-forget fashion, the result of a Callable would be inaccessible, so it's not supported. Conversely, submitting a Runnable with a void result is fine. Its Future returns null.

The memory athletes from my previous post, who memorized random stacks of cards, have it much tougher than you and I. Learning Java is about memorizing a lot of facts, but they're not random.
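A minimal sketch of the submit()/execute() distinction described above:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // execute() is fire-and-forget: only a Runnable, no Future, no result.
        executor.execute(() -> System.out.println("runnable executed"));

        // submit() also accepts a Callable and returns a Future holding its value.
        Future<Integer> answer = executor.submit(() -> 6 * 7);
        System.out.println(answer.get()); // prints 42

        // Submitting a Runnable works too; its Future simply yields null.
        Future<?> done = executor.submit(() -> System.out.println("runnable submitted"));
        System.out.println(done.get()); // prints null

        executor.shutdown();
    }
}
```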
Making a Visual Story

The ancient Greeks taught us how to construct mental memory palaces to store random facts for easy retrieval. Joshua Foer added a moonwalking Albert Einstein to jog his memory. You should make your code samples equally fun and memorable. Here's how to illustrate the fundamental differences between an ArrayList and a LinkedList.

Imagine a movie theater with a fixed number of seats (the ArrayList) and a line of patrons (the LinkedList) at the ticket booth, who receive a numbered ticket. People arrive (offer(..) or add(..)) at the tail of the queue irregularly, while every ten seconds the first person in the queue can enter the theater (poll(), element()) and is shown to their seat (seats.set(number, patron)). Let's add concurrency to the mix. Suppose there are two ticket booths, each with its own line, and a central ticket dispenser that increments a number. That's right: getAndIncrement() in AtomicInteger to the rescue. I'd happily show you the code, but that wouldn't teach you much.

Or take access rights in class hierarchies. Subtypes may not impose stricter access rights or declare broader or new checked exceptions. Let's put it less academically. Imagine a high-rise with multiple company offices (classes) and several floors (packages). Private access is limited to employees of one company. Package access extends to offices on the same floor. Public access is for everybody: other floors as well as external visitors. The proprietor provides a public bathroom that clearly shows when it's occupied. You can dress it up with scented towels and music through a subclass, but you must obey this contract:

public void visitRestRoom(Person p) throws OccupiedException { .. }

Every outside visitor is welcome to use it. You are not allowed to restrict access to only employees on your floor (package access), much less your own employees (private access). Neither may you bother visitors with a PaymentExpectedException. It violates the contract.

Code samples in the exam are meant to confuse you. Your own examples should do the exact opposite. You use real-life examples (a public office restroom, the queue outside a movie theater) and combine them in a way that is easy to visualize and fun to remember.

Mnemonics

Sometimes there's nothing for it but to commit stuff to memory, like the types you can use as a switch variable (byte, int, char, short, String, enum, var). You can string them together in a mnemonic like this one: In one short intense soundbite, the stringy character enumerated the seven variables for switch. Or how about the methods that operate on the front of a queue (element, push, peek, pop, poll, and remove)? Elmer pushed to the front of the queue to get a peek at the pop star, but he was pulled out and removed. Yes, it's far-fetched, silly, and outlandish. That's what makes them memorable. To me, at least.

Or try your hand at light verse. The educational benefit may not be as strong for you as a reader, but the time I spent crafting it made sure I won't quickly confuse Comparable and Comparator again.

The Incomparable Sonnet

You implement Comparable to sort
(in java.lang: no reason to import).
CompareTo runs through items with the aim
to see if they are different or the same.
If it returns a positive, it meant
that this was greater than the argument.
For smaller ones a minus is supplied,
a zero means the same, or "can't decide".

Comparator looks similar, but bear
in mind its logic is more self-contained.
It has a single method called compare
where difference of two args is ascertained.
A range of default methods supplement
your lambdas. Chain them to your heart's content!

Some Closing Thoughts

The aim of your practice is not to pass the exam as quickly as possible (or at all). It's to become a more competent developer and have fun studying.
I mentioned that there is some merit in playing human compiler, but that doesn't mean I fully agree with the OCP's line of questioning in its current form and its emphasis on API details. Being able to write code from scratch with only your limited memory to save you is not a must-have skill for a developer in the coming decade. She will need to acquire new skills to counter the relentless progress of AI in the field.

If I needed to assess you as a new joiner to our team and you showed me a 90% OCP passing grade, I'd be seriously impressed and a little jealous, but I still wouldn't be convinced that you're a great developer until I see some of your work. You could still be terrible. And you can be a competent programmer and fail the exam. That's where the OCP is so different from, say, a driving test. If you're a bad driver, you should not get a license, no exceptions. And if you fail the test, you're not a great driver. Full disclosure: it took me four tries.

If the original C language was a portable toolbox and Java 1.1 a toolshed, then Java 17 SE is a warehouse with many advanced power tools. The great thing is that you don't have to wonder what all the buttons do. The instructions are clearly printed on the tools themselves through autocomplete and Javadoc. It makes sense to know what tools the warehouse stocks and when you should use them. But learning the instructions by heart? I can think of a better use of my time, energy, and memory.
The ReactAndGo project is used to compare a single-page application frontend based on React and a Rest backend based on Go to Angular frontends and Spring Boot/Java backends. The goal of the project is to send out notifications to car drivers if the gas price falls below their target price. The gas prices are imported from a provider via MQTT messaging and stored in the database. For development, two test messages are provided that are sent to an Apache Artemis server to be processed in the project. The Apache Artemis server can be run as a Docker image, and the commands to download and run the image can be found in the 'docker-artemis.sh' file. As a database, Postgresql is used, and it can be run as a Docker image too. The commands can be found in the 'docker-postgres.sh' file.

Architecture

The system architecture looks like this: the React frontend uses the Rest interface that the Gin framework provides to communicate with the backend. The Apache Artemis Messaging Server is used in development to receive and send back the gas price test messages that are handled with the Paho-MQTT library. In production, the provider sends the MQTT messages. The Gorm framework is used to store the data in Postgresql. A push notification display is used to show the notification from the frontend if the target prices are reached.

The open-source projects using Go have more of a domain-driven architecture that splits the code for each domain into packages. For the ReactAndGo project, the domain-driven architecture is combined with a layered architecture to structure the code. The common BaseController is needed to manage the routes and security of the application. The architecture is split between the gas station domain, the push notification domain, and the application user domain. The Rest request and response handling is in its own layer that includes the Rest client for the gas station import. The service layer contains the logic, database access, and other helper functions. Domain-independent functions like cron jobs, JWT token handling, and message handling are implemented in separate packages that are in a utility role.

Notifications From React Frontend to Go/Gin/Gorm Backend

The ReactAndGo project is used to show how to display notifications with periodic requests to the backend and how to process Rest requests in the backend in controllers and repositories.

React Frontend

In the frontend, a dedicated worker is started after login that manages the notifications. The initWebWorker(...) function of the LoginModal.tsx starts the worker and handles the tokens:

```tsx
const initWebWorker = async (userResponse: UserResponse) => {
  let result = null;
  if (!globalWebWorkerRefState) {
    const worker = new Worker(new URL('../webpush/dedicated-worker.js', import.meta.url));
    if (!!worker) {
      worker.addEventListener('message', (event: MessageEvent) => {
        //console.log(event.data);
        if (!!event?.data?.Token && event?.data.Token?.length > 10) {
          setGlobalJwtToken(event.data.Token);
        }
      });
      worker.postMessage({
        jwtToken: userResponse.Token,
        newNotificationUrl: `/usernotification/new/${userResponse.Uuid}`,
      } as MsgData);
      setGlobalWebWorkerRefState(worker);
      result = worker;
    }
  } else {
    globalWebWorkerRefState.postMessage({
      jwtToken: userResponse.Token,
      newNotificationUrl: `/usernotification/new/${userResponse.Uuid}`,
    } as MsgData);
    result = globalWebWorkerRefState;
  }
  return result;
};
```

The React frontend uses the Recoil library for state management and checks if the globalWebWorkerRefState exists.
If not, the worker in dedicated-worker.js is created, and the event listener for the JWT tokens is registered. The JWT token is stored in a Recoil state to be used in all REST requests. The postMessage(...) method of the worker is then called to start the requests for the notifications. Finally, the worker is stored in the globalWebWorkerRefState and returned. The worker is developed in the dedicated-worker.ts file but is needed as a .js file. To get TypeScript's help, the worker is written in TypeScript and then converted to JavaScript in the TypeScript Playground, which saves a lot of time. The refreshToken(...) function of the worker refreshes the JWT tokens:

TypeScript-JSX
interface UserResponse {
  Token?: string
  Message?: string
}

let jwtToken = '';
let tokenIntervalRef: ReturnType<typeof setInterval>;

const refreshToken = (myToken: string) => {
  if (!!tokenIntervalRef) {
    clearInterval(tokenIntervalRef);
  }
  jwtToken = myToken;
  if (!!jwtToken && jwtToken.length > 10) {
    tokenIntervalRef = setInterval(() => {
      const requestOptions = {
        method: 'GET',
        headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${jwtToken}` },
      };
      fetch('/appuser/refreshtoken', requestOptions).then(response => response.json() as UserResponse)
        .then(result => {
          if ((!result.Message && !!result.Token && result.Token.length > 10)) {
            //console.log('Token refreshed.');
            jwtToken = result.Token;
            /* eslint-disable-next-line no-restricted-globals */
            self.postMessage(result);
          } else {
            jwtToken = '';
            clearInterval(tokenIntervalRef);
          }
        });
    }, 45000);
  }
}

The refreshToken(...) function first checks if another token interval has been started and stops it. Then the token is assigned and checked. If it passes the check, a new interval is started to refresh the token every 45 seconds. The requestOptions are created with the token in the Authorization header field. Then the new token is retrieved with fetch(...); the response is checked, the token is set, and it is posted to the event listener in LoginModal.tsx. If the JWT token has not been received, the interval is stopped, and the jwtToken is set to an empty string. The worker's event listener receives the message with the token and notification URL and processes it as follows:

TypeScript-JSX
interface MsgData {
  jwtToken: string;
  newNotificationUrl: string;
}

let notificationIntervalRef: ReturnType<typeof setInterval>;

/* eslint-disable-next-line no-restricted-globals */
self.addEventListener('message', (event: MessageEvent) => {
  const msgData = event.data as MsgData;
  refreshToken(msgData.jwtToken);
  if (!!notificationIntervalRef) {
    clearInterval(notificationIntervalRef);
  }
  notificationIntervalRef = setInterval(() => {
    if (!jwtToken) {
      clearInterval(notificationIntervalRef);
      return; // stop polling without a valid token
    }
    const requestOptions = {
      method: 'GET',
      headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${jwtToken}` },
    };
    /* eslint-disable-next-line no-restricted-globals */
    self.fetch(msgData.newNotificationUrl, requestOptions).then(result => result.json()).then(resultJson => {
      if (!!resultJson && resultJson?.length > 0) {
        /* eslint-disable-next-line no-restricted-globals */
        self.postMessage(resultJson);
        //Notification
        //console.log(Notification.permission);
        if (Notification.permission === 'granted') {
          if (resultJson?.length > 0 && resultJson[0]?.Message?.length > 1 && resultJson[0]?.Title?.length > 1) {
            for (let value of resultJson) {
              new Notification(value?.Title, { body: value?.Message });
            }
          }
        }
      }
    });
  }, 60000);
});

The addEventListener(...)
method handles the MessageEvent messages containing the MsgData. The jwtToken of the MsgData is used to start the refreshToken(...) function. Next, it is checked whether a notification interval has already been started, and if so, it is stopped. A new interval is then created that checks for new target-matching gas prices every 60 seconds. The jwtToken is checked, and if the check fails, the interval is stopped. The requestOptions are created with the JWT token in the Authorization header field, and fetch(...) is used to retrieve the new matching gas price updates. The result JSON is checked and posted back to the event listener in LoginModal.tsx. The Notification.permission property is checked; the value 'granted' means the user has allowed notifications (the permission itself is requested from the user via Notification.requestPermission()). The data for the notification is checked, and the notification is sent with new Notification(...).

Backend

To handle the frontend requests, the Go backend uses the Gin framework. The Gin framework provides the functions needed to handle REST requests, like a router, a context (URL-related functionality), TLS support, and JSON handling. The route is defined in basecontroller.go:

Go
func Start(embeddedFiles fs.FS) {
	router := gin.Default()
	...
	router.GET("/usernotification/new/:useruuid", token.CheckToken, getNewUserNotifications)
	...
	router.GET("/usernotification/current/:useruuid", token.CheckToken, getCurrentUserNotifications)
	router.StaticFS("/public", http.FS(embeddedFiles))
	router.NoRoute(func(c *gin.Context) {
		c.Redirect(http.StatusTemporaryRedirect, "/public")
	})
	absolutePathKeyFile := strings.TrimSpace(os.Getenv("ABSOLUTE_PATH_KEY_FILE"))
	absolutePathCertFile := strings.TrimSpace(os.Getenv("ABSOLUTE_PATH_CERT_FILE"))
	myPort := strings.TrimSpace(os.Getenv("PORT"))
	if len(absolutePathCertFile) < 2 || len(absolutePathKeyFile) < 2 || len(myPort) < 2 {
		router.Run() // listen and serve on 0.0.0.0:8080 by default
	} else {
		log.Fatal(router.RunTLS(":"+myPort, absolutePathCertFile, absolutePathKeyFile))
	}
}

The Start function gets the embedded files for the /public directory with the static frontend files. The line:

Go
router.GET("/usernotification/new/:useruuid", token.CheckToken, getNewUserNotifications)

creates the route /usernotification/new/:useruuid with useruuid as a path parameter. The CheckToken function in the token.go file handles the JWT token validation. The getNewUserNotifications function in the uncontroller.go file handles the requests. The getNewUserNotifications(...) function:

Go
func getNewUserNotifications(c *gin.Context) {
	userUuid := c.Param("useruuid")
	myNotifications := notification.LoadNotifications(userUuid, true)
	c.JSON(http.StatusOK, mapToUnResponses(myNotifications))
}

...

func mapToUnResponses(myNotifications []unmodel.UserNotification) []unbody.UnResponse {
	var unResponses []unbody.UnResponse
	for _, myNotification := range myNotifications {
		unResponse := unbody.UnResponse{
			Timestamp: myNotification.Timestamp,
			UserUuid:  myNotification.UserUuid,
			Title:     myNotification.Title,
			Message:   myNotification.Message,
			DataJson:  myNotification.DataJson,
		}
		unResponses = append(unResponses, unResponse)
	}
	return unResponses
}

The getNewUserNotifications(...) function uses the Gin context to get the path parameter useruuid and then calls the LoadNotifications(...) function of the repository with it. The result is mapped to UnResponse structs with the mapToUnResponses(...) function, which returns only the data needed by the frontend. The Gin context is used to return the HTTP status OK and to marshal the responses to JSON.
The LoadNotifications(...) function is in the unrepo.go file and loads the notifications from the database with the Gorm framework:

Go
func LoadNotifications(userUuid string, newNotifications bool) []unmodel.UserNotification {
	var userNotifications []unmodel.UserNotification
	if newNotifications {
		database.DB.Transaction(func(tx *gorm.DB) error {
			tx.Where("user_uuid = ? and notification_send = ?", userUuid, !newNotifications).Order("timestamp desc").Find(&userNotifications)
			for _, userNotification := range userNotifications {
				userNotification.NotificationSend = true
				tx.Save(&userNotification)
			}
			return nil
		})
	} else {
		database.DB.Transaction(func(tx *gorm.DB) error {
			tx.Where("user_uuid = ?", userUuid).Order("timestamp desc").Find(&userNotifications)
			var myUserNotifications []unmodel.UserNotification
			for index, userNotification := range userNotifications {
				if index < 10 {
					myUserNotifications = append(myUserNotifications, userNotification)
					continue
				}
				tx.Delete(&userNotification)
			}
			userNotifications = myUserNotifications
			return nil
		})
	}
	return userNotifications
}

The LoadNotifications(...) function checks if only new notifications are requested. In that case, a database transaction is created, and the new UserNotifications (see notification.go) of the user are selected, ordered newest first. The NotificationSend flag is set to true to mark them as no longer new, and the UserNotifications are saved to the database. The transaction is then closed, and the notifications are returned. If the current notifications are requested, a database transaction is opened, and the UserNotifications of the user are selected, ordered newest first. The first 10 notifications of the user are appended to the myUserNotifications slice, and the others are deleted from the database. Then the transaction is closed, and the notifications are returned.

Conclusion

This is my first React frontend, and these are my experiences developing it. React is a much smaller library than the Angular framework and needs additional libraries, such as Recoil for state management. Features like intervals are included in RxJS, which ships with Angular (see the brief sketch at the end of this article). React offers far fewer features out of the box and needs more additional libraries to achieve the same result. Angular is better for use cases where the frontend needs more than basic features; React has the advantage for simple frontends. A React frontend that grows to medium size will need more design and architecture work to be comparable to an Angular solution and might take more effort during development due to its less opinionated design. My impression is: React is the kit plane that you have to assemble yourself; Angular is the plane that rolls out of the factory. The Go/Gin/Gorm backend works well. The Go language is much simpler than Java, which makes the code fast to read. Go can be learned in a relatively short amount of time, has static types, and offers a lightweight concurrency concept (goroutines) that Project Loom aims to bring to Java. The Gin framework offers the features needed to develop the controllers and can be compared to the Spring Boot framework in features and ease of development. The Gorm framework offers the features needed to develop the repositories for database access and management and can likewise be compared to the Spring Boot stack in terms of features and ease of development. The selling point of Go is its lower memory consumption and fast startup because it compiles to a native binary and does not need a virtual machine. Both Go and Java are garbage-collected.
Java can catch up on startup time with Project Graal, but medium- to large-sized example applications have to be available and analyzed for memory consumption first. A decision can then be based on developer skills, the amount of memory saved, and the expected progress of Project Graal.
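To illustrate the point about intervals from the conclusion: in an Angular-style codebase, the periodic notification polling that the dedicated worker above implements with setInterval could be written with RxJS primitives. This is only a sketch; the endpoint, UUID, and token here are placeholders, not code from the ReactAndGo project.

TypeScript
import { interval } from 'rxjs';
import { switchMap } from 'rxjs/operators';

const jwtToken = 'placeholder-jwt-token'; // assumed to come from the login flow

// Emit every 60 seconds and switch to the latest fetch result,
// mirroring the setInterval-based polling of the dedicated worker.
const notifications$ = interval(60000).pipe(
  switchMap(() =>
    fetch('/usernotification/new/some-user-uuid', {
      headers: { 'Authorization': `Bearer ${jwtToken}` },
    }).then((response) => response.json())
  )
);

notifications$.subscribe((resultJson) => console.log(resultJson));

One advantage of this style is that an operator like switchMap discards responses from superseded requests, something the plain setInterval version has to handle manually.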
The console.log function — the poor man’s debugger — is every JavaScript developer’s best friend. We use it to verify that a certain piece of code was executed or to check the state of the application at a given point in time. We may also use console.warn to send warning messages or console.error to explain what happened when things have gone wrong. Logging makes it easy to debug your app during local development. But what about debugging your Node.js app while it’s running in a hosted cloud environment? The logs are kept on the server, to which you may or may not have access. How do you view your logs then? Most companies use application performance monitoring tools and observability tools for better visibility into their hosted apps. For example, you might send your logs to a log aggregator like Datadog, Sumo Logic, or Papertrail, where logs can be viewed and queried. In this article, we’ll look at how we can configure an app that is hosted on Render to send its system logs to Papertrail by using Render Log Streams. By the end, you’ll have your app up and running — and logging — in no time.

Creating Our Node.js App and Hosting It With Render

Render is a cloud hosting platform made for developers by developers. With Render, you can easily host your static sites, web services, cron jobs, and more. We’ll start with a simple Node.js and Express app for our demo. You can find the GitHub repo here. You can also view the app here. To follow along on your machine, fork the repo so that you have a copy running locally. You can install the project’s dependencies by running yarn install, and you can start the app by running yarn start. Easy enough!

Render Log Streams demo app

Now it’s time to get our app running on Render. If you don’t have a Render account yet, create one now. It’s free! Once you’re logged in, click the “New” button and then choose the “Web Service” option from the menu.

Creating a new web service

This will take you to the next page, where you’ll select the GitHub repo you’d like to connect. If you haven’t connected your GitHub account yet, you can do so here. And if you have connected your GitHub account but haven’t given Render access to your specific repo yet, you can click the “Configure account” button. This will take you to GitHub, where you can grant access to all your repos or just a selection of them.

Connecting your GitHub repo

Back on Render, after connecting to your repo, you’ll be taken to a configuration page. Give your app a name (I chose the same name as my repo, but it can be anything), and then provide the correct build command (yarn, which is a shortcut for yarn install) and start command (yarn start). Choose your instance type (free tier), and then click the “Create Web Service” button at the bottom of the page to complete your configuration setup.

Configuring your app

With that, Render will deploy your app. You did it! You now have an app hosted on Render’s platform.

Log output from your Render app’s first deployment

Creating Our Papertrail Account

Let’s now create a Papertrail account. Papertrail is a log aggregator tool that helps make log management easy. You can create an account for free — no credit card is required. Once you’ve created your account, click on the “Add your first system” button to get started.

Adding your first system in Papertrail

This will take you to the next page, which provides you with your syslog endpoint at the top of the screen.
There are also instructions for running an install script, but in our case, we don’t actually need to install anything! So just copy that syslog endpoint, and we’ll paste it in just a bit.

Syslog endpoint

Connecting Our Render App to Papertrail

We now have an app hosted on Render, and we have a Papertrail account for logging. Let’s connect the two! Back in the Render dashboard, click on your avatar in the global navigation, then choose “Account Settings” from the drop-down menu.

Render account settings

Then, in the secondary side navigation, click on the “Log Streams” tab. Once on that page, you can click the “Add Log Stream” button, which will open a modal. Paste your syslog endpoint from Papertrail into the “Log Endpoint” input field, and then click “Add Log Stream” to save your changes.

Adding your log stream

You should now see your Log Stream endpoint shown in Render’s dashboard.

Render Log Stream dashboard

Great! We’ve connected Render to Papertrail. What’s neat is that we’ve set up this connection for our entire Render account, so we don’t have to configure it for each individual app hosted on Render.

Adding Logs to Our Render App

Now that we have our logging configured, let’s take it for a test run. In our GitHub repo’s code, we have the following in our app.js file:

JavaScript
app.get('/', (req, res) => {
  console.log('Log - home page');
  console.info('Info - home page');
  console.warn('Warn - home page');
  console.error('Error - home page');
  console.debug('Debug - home page');
  return res.sendFile('index.html', { root: 'public' });
});

When a request is made to the root URL of our app, we do a bit of logging and then send the index.html file to the client. The user doesn’t see any of the logs since these are server-side rather than client-side logs. Instead, the logs are kept on our server, which, again, is hosted on Render. To generate the logs, open your demo app in your browser. This will trigger a request for the home page. If you’re following along, your app URL will be different from mine, but my app is hosted here.

Viewing Logs in Papertrail

Let’s go find those logs in Papertrail. After all, they were logged to our server, but our server is hosted on Render. In your Papertrail dashboard, you should see at least two systems: one for Render itself, which was used to test the account connection, and one for your Render app (“render-log-stream-demo” in my case).

Papertrail systems

Click on the system for your Render app, and you’ll see a page where all the logs are shown and tailed, with the latest logs appearing at the bottom of the screen.

Render app logs in Papertrail

You can see that we have logs for many events, not just the data that we chose to log from our app.js file. These are the syslogs, so you also get helpful log data from when Render was installing dependencies and deploying your app! At the bottom of the page, we can enter search terms to query our logs. We don’t have many logs here yet, but when you’re running a web service that gets millions of requests per day, these log outputs can get very large very quickly.

Searching logs in Papertrail

Best Practices for Logging

This leads us to some good questions: Now that we have logging set up, what exactly should we be logging? And how should we be formatting our logs so that they’re easy to query when we need to find them? What you’re logging and why you’re logging something will vary by situation. You may be adding logs after a customer issue is reported that you’re unable to reproduce locally. By adding logs to your app, you can get better visibility into what’s happening live in production. This is a reactive form of logging in which you’re adding new logs to certain files and functions after you realize you need them. As a more proactive form of logging, there may be important business transactions that you want to log all the time, such as account creation or order placement. This will give you greater peace of mind that events are being processed as expected throughout the day. It will also help you see the volume of events generated in any given interval. And, when things do go wrong, you’ll be able to pinpoint when your log output changed. How you format your logs is up to you, but you should be consistent in your log structure. In our example, we just logged text strings, but it would be even better to log our data in JSON format. With JSON, we can include key-value pairs for all of our messages. For each message, we might choose to include data for the user ID, the timestamp, the actual message text, and more. The beauty of JSON is that it makes querying your logs much easier, especially when viewing them in a log aggregator tool that contains thousands or millions of other messages.
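As a rough sketch of what that could look like (the helper name and fields below are invented for illustration, not part of the demo repo):

TypeScript
// Minimal structured-logging sketch: one JSON object per log line.
type LogLevel = 'info' | 'warn' | 'error';

function logJson(level: LogLevel, message: string, fields: Record<string, unknown> = {}): void {
  // Log aggregators like Papertrail can then filter and query by these keys.
  console.log(JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...fields,
  }));
}

logJson('info', 'order placed', { userId: 42, orderId: 'A-1001' });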
Conclusion

There you have it — how to host your app on Render and configure logging with Render Log Streams and Papertrail. Both platforms only took minutes to set up, and now we can manage our logs with ease. Keep in mind that Render Log Streams let you send your logs to any of several different log aggregators, giving you lots of options. For example, Render logs can be sent to Sumo Logic. You just need to create a Cloud Syslog Source in your Sumo Logic account. Or you can send your logs to Datadog as well. With that, it’s time for me to log off. Thanks for reading. Happy coding, and happy logging!
We want to save our thumbnail data to a database so that we can render our pictures to a nice HTML gallery page and finish the proof of concept for our Google Photos clone! Which database should we use and why? Which Java database API? What database tools will make our lives easier along the way? Find out in this episode of Marco Codes!

What’s in the Video

00:00 Intro
We'll cover what the plan is for this episode: to add database capabilities to our Google Photos clone, which currently only works with files but doesn't store their metadata in a database table.

00:52 Where We Left Off
Before jumping straight into implementing database and ORM features, we will do a quick code recap of the previous episodes to remind ourselves how the image scanning and conversion process currently works.

01:46 Setup
Whenever we want to do something with databases and Java, we need a couple of (in this case) Maven dependencies. More specifically, we want to make sure to add the H2 database to our project, which we will use for production, not just for testing! We'll also add the HikariCP connection pool, something I do by default in every project and which is usually done automatically by frameworks like Spring Boot.

04:38 Writing a Database Schema
Here, I present my current approach when doing Java database work: making sure the database schema is hand-written, thinking through table names, column names, types, etc. Hence, we'll start writing a schema.sql file for our new "media" table during this section.

10:08 Creating a DataSource
Having created the schema, we'll need to create a DataSource next. As we're using HikariCP, we'll follow its documentation pages to set up the DataSource. We'll also make sure the schema.sql file written earlier gets automatically executed whenever we run our application.

12:46 Saving Thumbnail Data
It's finally time to not just render thumbnail files on disk but also save information about the generated thumbnails and original images in our brand-new database table! We'll use plain JDBC to do that and talk about its advantages and disadvantages.

14:00 Refactoring Maneuver
Sometimes you just need to _see_ certain things that are very hard to explain in words. To clean up our program, we will have to change a couple of method signatures and move parameters up and down throughout the file.

16:21 Extracting Image Creation Dates
At the moment, we don't properly detect the image creation date from its metadata. We'll talk about how to implement this in the future and why we'll stick with the file creation date for now.

17:10 Avoiding Duplication
We'll also need to handle duplicates. If we re-run our program several times, we don't want to store the image metadata multiple times in our tables. Let's fix this here.

19:04 Inspecting H2 File DBs
In case you don't know how to access H2 file databases, we will spend some time showing you how to do that from inside IntelliJ IDEA and its database tool window.

21:23 Rendering HTML Output
Last but not least, we'll need to render all the information from our database to a nice, little HTML page so we can actually browse our thumbnails! As a bonus, this will be the simplest and probably dirtiest implementation of such an HTML page you've seen for a while — but it works!

30:30 What’s Next?
Did you like what you saw? Which feature should we implement next? Let me know!
In this post, we'll delve into the fascinating world of operator overloading in Java. Although Java doesn't natively support operator overloading, we'll discover how Manifold can extend Java with that functionality. We'll explore its benefits, limitations, and use cases, particularly in scientific and mathematical code. We will also explore three powerful features provided by Manifold that enhance the default Java type safety while enabling impressive programming techniques. We'll discuss unit expressions, type-safe reflection coding, and fixing methods like equals during compilation. Additionally, we'll touch upon a solution that Manifold offers to address some limitations of the var keyword. Let's dive in! Before we begin, as always, you can find the code examples for this post and other videos in this series on my GitHub page. Be sure to check out the project, give it a star, and follow me on GitHub to stay updated!

Arithmetic Operators

Operator overloading allows us to use familiar mathematical notation in code, making it more expressive and intuitive. While Java doesn't support operator overloading by default, Manifold provides a solution to this limitation. To demonstrate, let's start with a simple Vector class that performs vector arithmetic operations. In standard Java code, we define variables, accept them in the constructor, and implement methods like plus for vector addition. However, this approach can be verbose and less readable.

Java
public class Vec {
    private float x, y, z;

    public Vec(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public Vec plus(Vec other) {
        return new Vec(x + other.x, y + other.y, z + other.z);
    }
}

With Manifold, we can simplify the code significantly. Using Manifold's operator overloading features, we can directly add vectors together using the + operator as such:

Java
Vec vec1 = new Vec(1, 2, 3);
Vec vec2 = new Vec(1, 1, 1);
Vec vec3 = vec1 + vec2;

Manifold seamlessly maps the operator to the appropriate method invocation, making the code cleaner and more concise. This fluid syntax resembles mathematical notation, enhancing code readability. Moreover, Manifold handles reverse notation gracefully. Suppose we reverse the order of the operands, such as a scalar plus a vector; Manifold swaps the order and performs the operation correctly. This flexibility enables us to write code in a more natural and intuitive manner. Let’s say we add this to the Vec class:

Java
public Vec plus(float other) {
    return new Vec(x + other, y + other, z + other);
}

This will make all these lines valid:

Java
vec3 += 5.0f;
vec3 = 5.0f + vec3;
vec3 = vec3 + 5.0f;
vec3 += Float.valueOf(5.0f);

In this code, we demonstrate that Manifold can swap the order to invoke Vec.plus(float) seamlessly. We also show that support for the plus-equals (+=) operator is built into the plus method. As implied by the previous code, Manifold also supports primitive wrapper objects, specifically in the context of autoboxing. In Java, primitive types have corresponding wrapper objects. Manifold handles the conversion between primitives and their wrapper objects seamlessly, thanks to autoboxing and unboxing. This enables us to work with objects and primitives interchangeably in our code. There are caveats to this, as we will find out.

BigDecimal Support

Manifold goes beyond simple arithmetic and supports more complex scenarios. For example, the manifold-science dependency includes built-in support for BigDecimal arithmetic.
BigDecimal is a Java class used for precise calculations involving large numbers or financial computations. By using Manifold, we can perform arithmetic operations with BigDecimal objects using familiar operators, such as +, -, *, and /. Manifold's integration with BigDecimal simplifies code and ensures accurate calculations. The following code is legal once we add the right set of dependencies, which add method extensions to the BigDecimal class:

Java
var x = new BigDecimal(5L);
var y = new BigDecimal(25L);
var z = x + y;

Under the hood, Manifold adds the applicable plus, minus, times, etc. methods to the class. It does so by leveraging class extensions, which I discussed before.

Limits of Boxing

We can also extend existing classes to support operator overloading. Manifold allows us to extend classes and add methods that accept custom types or perform specific operations. For instance, we can extend the Integer class and add a plus method that accepts BigDecimal as an argument and returns a BigDecimal result. This extension enables us to perform arithmetic operations between different types seamlessly. The goal is to get this code to compile:

Java
var z = 5 + x + y;

Unfortunately, this won’t compile with that change. The number five is a primitive, not an Integer, and the only way to get that code to work would be:

Java
var z = Integer.valueOf(5) + x + y;

This isn’t what we want. However, there’s a simple solution. We can create an extension to BigDecimal itself and rely on the fact that the order can be swapped seamlessly. This means that this simple extension can support the 5 + x + y expression without a change:

Java
@Extension
public class BigDecimalExt {
    public static BigDecimal plus(@This BigDecimal b, int i) {
        return b.plus(BigDecimal.valueOf(i));
    }
}

List of Arithmetic Operators

So far, we focused on the plus operator, but Manifold supports a wide range of operators. The following table lists each method name and the operators it supports:

Operator(s) → Method
+, += → plus
-, -= → minus
*, *= → times
/, /= → div
%, %= → rem
-a → unaryMinus
++ → inc
-- → dec

Notice that the increment and decrement operators make no distinction between prefix and postfix positioning. Both a++ and ++a lead to the inc method.

Index Operator

The support for the index operator took me completely off guard when I looked at it. This is a complete game-changer… The index operator is the square brackets we use to get an array value by index. To give you a sense of what I’m talking about, this is valid code in Manifold:

Java
var list = List.of("A", "B", "C");
var v = list[0];

In this case, v will be “A”, and the code is the equivalent of invoking list.get(0). The index operators seamlessly map to get and set methods. We can do assignments as well using the following:

Java
var list = new ArrayList<>(List.of("A", "B", "C"));
var v = list[0];
list[0] = "1";

Notice I had to wrap the List in an ArrayList since List.of() returns an unmodifiable List. But this isn’t the part I’m reeling about. That code is “nice.” This code is absolutely amazing:

Java
var map = new HashMap<>(Map.of("Key", "Value"));
var key = map["Key"];
map["Key"] = "New Value";

Yes! You’re reading valid code in Manifold. An index operator is used to look up entries in a map. Notice that a map has a put() method and not a set method. That’s an annoying inconsistency that Manifold fixed with an extension method. We can then use an object to look up values within a map using the operator.
Relational and Equality Operators

We still have a lot to cover… Can we write code like this (referring to the Vec object from before)?

Java
if (vec3 > vec2) {
    // …
}

This won’t compile by default. However, if we add the Comparable interface to the Vec class, this will work as expected:

Java
public class Vec implements Comparable<Vec> {
    // …

    public double magnitude() {
        return Math.sqrt(x * x + y * y + z * z);
    }

    @Override
    public int compareTo(Vec o) {
        return Double.compare(magnitude(), o.magnitude());
    }
}

The >=, >, <, <= comparison operators will work exactly as expected by invoking the compareTo method. But there’s a big problem. You will notice that the == and != operators are missing from this list. In Java, we often use these operators to perform pointer comparisons. This makes a lot of sense in terms of performance. We wouldn’t want to change something so inherent in Java. To avoid that, Manifold doesn’t override these operators by default. However, we can implement the ComparableUsing interface, which is a sub-interface of the Comparable interface. Once we do that, == and != will use the equals method by default. We can override that behavior by overriding the equalityMode() method, which can return one of these values:

CompareTo — will use the compareTo method for == and !=
Equals (the default) — will use the equals method
Identity — will use pointer comparison, as is the norm in Java

That interface also lets us override the compareToUsing(T, Operator) method. This is similar to the compareTo method but lets us create operator-specific behavior, which might be important in some edge cases.

Unit Expressions for Scientific Coding

Notice that unit expressions are experimental in Manifold. But they are one of the most interesting applications of operator overloading in this context. Unit expressions are a new type of operator that significantly simplifies and enhances scientific coding while enforcing strong typing. With unit expressions, we can define notations for mathematical expressions that incorporate unit types. This brings a new level of clarity and type safety to scientific calculations. For example, consider a distance calculation where speed is defined as 100 miles per hour. By multiplying the speed (miles per hour) by the time (hours), we can obtain the distance as such:

Java
Length distance = 100 mph * 3 hr;
Force force = 5kg * 9.807 m/s/s;
if (force == 49.035 N) {
    // true
}

The unit expressions allow us to express numeric values (or variables) along with their associated units. The compiler checks the compatibility of units, preventing incompatible conversions and ensuring accurate calculations. This feature streamlines scientific code and enables powerful calculations with ease. Under the hood, a unit expression is just a conversion call. The expression 100 mph is converted to:

Java
VelocityUnit.postfixBind(Integer.valueOf(100))

This expression returns a Velocity object. The expression 3 hr is similarly bound to the postfix method and returns a Time object. At this point, the Manifold Velocity class has a times method, which, as you recall, is an operator, and it’s invoked on both results:

Java
public Length times( Time t ) {
  return new Length( toBaseNumber() * t.toBaseNumber(), LengthUnit.BASE, getDisplayUnit().getLengthUnit() );
}

Notice that the class has multiple overloaded versions of the times method that accept different object types. A Velocity times a Mass produces Momentum. A Velocity times a Force results in Power.
Many units are supported as part of this package, even in this early experimental stage; check them out here. You might notice a big omission here: currency. I would love to have something like:

Java
var sum = 50 USD + 70 EUR;

If you look at that code, the problem should be apparent: we need an exchange rate. This makes no sense without exchange rates and possibly conversion costs. The complexities of financial calculations don’t translate as nicely to the current state of the code. I suspect that this is the reason this is still experimental. I’m very curious to see how something like this can be solved elegantly.

Pitfalls of Operator Overloading

While Manifold provides powerful operator overloading capabilities, it's important to be mindful of potential challenges and performance considerations. Manifold's approach can lead to additional method calls and object allocations, which may impact performance, especially in performance-critical environments. It's crucial to consider optimization techniques, such as reducing unnecessary method calls and object allocations, to ensure efficient code execution. Let’s look at this code:

Java
var n = x + y + z;

On the surface, it can seem efficient and short. It physically translates to this code:

Java
var n = x.plus(y).plus(z);

This is still hard to spot, but notice that in order to create the result, we invoke two methods and allocate at least two objects. A more efficient approach would be:

Java
var n = x.plus(y, z);

This is an optimization we often do for high-performance matrix calculations. You need to be mindful of this and understand what the operator is doing under the hood if performance is important. I don’t want to imply that operators are inherently slower. In fact, they’re as fast as a method invocation, but sometimes the specific method invoked and the number of allocations are unintuitive.

Type Safety Features

The following features aren’t related to operator overloading, but they were a part of the second video, so I feel they make sense as part of a wide-sweeping discussion on type safety. One of my favorite things about Manifold is its support of strict typing and compile-time errors. To me, both represent the core spirit of Java.

JailBreak: Type-Safe Reflection

@Jailbreak is a feature that grants access to the private state within a class. While it may sound bad, @Jailbreak offers a better alternative to using traditional reflection to access private variables. By jailbreaking a class, we can access its private state seamlessly, with the compiler still performing type checks. In that sense, it’s the lesser of two evils. If you’re going to do something terrible (accessing private state), then at least have it checked by the compiler. In the following code, the value array is private to String, yet we can manipulate it thanks to the @Jailbreak annotation. This code will print “Ex0osed…”:

Java
@Jailbreak String exposedString = "Exposed...";
exposedString.value[2] = '0';
System.out.println(exposedString);

@Jailbreak can be applied to static fields and methods as well. However, accessing static members requires assigning null to the variable, which may seem counterintuitive. Nonetheless, this feature provides a more controlled and type-safe approach to accessing the internal state, minimizing the risks associated with using reflection.

Java
@Jailbreak String str = null;
str.isASCII(new byte[] { 111, (byte)222 });

Finally, all objects in Manifold are injected with a jailbreak() method.
This method can be used like this (notice that fastTime is a private field):

Java
Date d = new Date();
long t = d.jailbreak().fastTime;

Self Annotation: Enforcing Method Parameter Types

In Java, certain APIs accept objects as parameters, even when a more specific type could be used. This can lead to potential issues and errors at runtime. However, Manifold introduces the @Self annotation, which helps enforce the type of object passed as a parameter. By annotating the parameter with @Self, we explicitly state that only the specified object type is accepted. This ensures type safety and prevents the accidental use of incompatible types. With this annotation, the compiler catches such errors during development, reducing the likelihood of encountering issues in production. Let’s look at the MySizeClass from my previous posts:

Java
public class MySizeClass {
    int size = 5;

    public int size() {
        return size;
    }

    public void setSize(int size) {
        this.size = size;
    }

    public boolean equals(@Self Object o) {
        return o != null && ((MySizeClass)o).size == size;
    }
}

Notice I added an equals method and annotated the argument with @Self. If I remove the @Self annotation, this code will compile:

Java
var size = new MySizeClass();
size.equals("");
size.equals(new MySizeClass());

With the @Self annotation, the string comparison will fail during compilation.

Auto Keyword: A Stronger Alternative to Var

I’m not a huge fan of the var keyword. I feel it didn’t simplify much, and the price is coding to an implementation instead of to an interface. I understand why the devs at Oracle chose this path. Conservative decisions are the main reason I find Java so appealing. Manifold has the benefit of working outside of those constraints, and it offers a more powerful alternative called auto. auto can be used in fields and method return values, making it more flexible than var. It provides a concise and expressive way to define variables without sacrificing type safety. auto is particularly useful when working with tuples, a feature not yet discussed in this post. It allows for elegant and concise code, enhancing readability and maintainability. You can effectively use auto as a drop-in replacement for var.

Finally

Operator overloading with Manifold brings expressive and intuitive mathematical notation to Java, enhancing code readability and simplicity. While Java doesn't natively support operator overloading, Manifold empowers developers to achieve similar functionality and use familiar operators in their code. By leveraging Manifold, we can write more fluid and expressive code, particularly in scientific, mathematical, and financial applications. The type-safety enhancements in Manifold make Java more… well, “Java-like.” They let Java developers build upon the strong foundation of the language and embrace a more expressive, type-safe programming paradigm. Should we add operator overloading to Java itself? I'm not in favor. I love that Java is slow, steady, and conservative. I also love that Manifold is bold and adventurous. That way, I can pick it when I'm doing a project where this approach makes sense (e.g., a startup project) but pick standard, conservative Java for an enterprise project.
I was recently involved in the TypeScript migration of the ZK Framework. For those who are new to ZK: ZK is the Java counterpart of the Node.js stack; i.e., ZK is a Java full-stack web framework where you can implement event callbacks in Java and control the frontend UI with Java alone. Over more than a decade of development and expansion, we have reached a code base of more than 50K lines of JavaScript and over 400K lines of Java, but we noticed that we are spending almost the same amount of time and effort maintaining the Java and the JavaScript code, which means that, line for line, JavaScript is 8 times harder to maintain than Java in our project. I would like to share the reason we made the move to migrate from JavaScript to TypeScript, the options we evaluated, how we automated a large part of the migration, and how it changed the way we work and gave us confidence.

The Problem

ZK has been a server-centric solution for more than a decade. In recent years, we noticed the need for cloud-native support and have made this the main goal of our upcoming new version, ZK 10. The new feature will alleviate servers’ burden by transferring much of the model-view-model bindings to the client side so that the server side becomes as stateless as possible. This brings benefits such as reduced server memory consumption, simplified load balancing for ZK 10 clustered backends, and potentially easier integration with other frontend frameworks. We call this effort “Client MVVM.” However, this implies huge growth of the JavaScript code. As we are already aware that JavaScript is harder to maintain, it is high time that we made our JavaScript codebase easier to work with at 50K lines of code. Otherwise, extending the existing JavaScript code with the whole MVVM stack will become Sisyphean, if not impossible. We started to look at why Java has higher productivity and how we can bring the same productivity to our client side.

Why Does Java Beat JavaScript at Large-Scale Development?

What did Java get right to give us an 8x boost in productivity? We conclude that the availability of static analysis is the primary factor. We design and write programs long before programs are executed and often before compilation. Normally, we refactor, implement new features, and fix bugs by modifying source code instead of modifying the compiler-generated machine code or the memory of the live program. That is, programmers analyze programs statically (before execution) as opposed to dynamically (during execution). Not only is static analysis more natural to humans, but static analysis is also easier to automate. Nowadays, compilers not only generate machine code from source code but also perform the sort of analysis that humans would do on source code, like name resolution, initialization guards, dead-code analysis, etc. Humans can still perform static analysis on JavaScript code. However, without the help of automated static analyzers (compilers and linters), reasoning about JavaScript code becomes extremely error-prone and time-consuming. What value does the following JavaScript function return?

JavaScript
function f() {
  return
    1
}

It’s actually undefined instead of 1. Surprised? Compare this with Java, where we have the compiler to aid our reasoning “as we type." With TypeScript, the compiler will perform “automatic semicolon insertion” analysis followed by dead-code analysis, yielding a warning that the 1 after the return is unreachable. Humans can never beat the meticulousness of machines.
By delegating these sorts of monotonous but critical tasks to machines, we can free up a huge amount of time while achieving unprecedented reliability.

How Can We Enable Static Analysis for JavaScript?

We evaluated the following options and settled on TypeScript due to its extensive ECMA standard conformance, complete support for all mainstream JS module systems, and massive ecosystem. We provide a comparison of them at the end of the article. Here is a short synopsis:

Google’s Closure Compiler: All types are specified in JSDoc, thereby bloating code and making inline type assertions very clumsy
Facebook’s Flow: A much smaller ecosystem in terms of tooling and libraries compared to TypeScript
Microsoft’s TypeScript: The most mature and complete solution
Scala.js: Subpar emitted JavaScript code
ReScript: Requires a paradigm shift to purely functional programming; otherwise, very promising

Semi-Automated Migration to TypeScript

Prior to the TypeScript migration, our JavaScript code largely consisted of prototype inheritance via our ad hoc zk.$extends function, as shown in the first snippet below. We intend to transform it into the semantically equivalent TypeScript shown in the second snippet.

JavaScript
Module.Class = zk.$extends(Super, {
  field: 1,
  field_: 2,
  _field: 3,
  $define: {
    field2: function () {
      // Do something in setter.
    },
  },
  $init: function() {},
  method: function() {},
  method_: function() {},
  _method: function() {},
}, {
  staticField: 1,
  staticField_: 2,
  _staticField: 3,
  staticMethod: function() {},
  staticMethod_: function() {},
  _staticMethod: function() {},
});

TypeScript
export namespace Module {
  @decorator('meta-data')
  export class Class extends Super {
    public field = 1;
    protected field_ = 2;
    private _field = 3;
    private _field2?: T;
    public getField2(): T | undefined {
      return this._field2;
    }
    public setField2(field2: T): this {
      const old = this._field2;
      this._field2 = field2;
      if (old !== field2) {
        // Do something in setter.
      }
      return this;
    }
    public constructor() {
      super();
    }
    public method() {}
    protected method_() {}
    private _method() {}
    public static staticField = 1;
    protected static staticField_ = 2;
    private static _staticField = 3;
    public static staticMethod() {}
    protected static staticMethod_() {}
    private static _staticMethod() {}
  }
}

There are hundreds of such cases, among which many have close to 50 properties. If we were to rewrite them manually, it would not only take a very long time but be riddled with typos. Upon closer inspection, the transformation rules are quite straightforward. They should be subject to automation! Then, the process would be fast and reliable. Indeed, it is a matter of parsing the original JavaScript code into an abstract syntax tree (AST), modifying the AST according to some specific rules, and consolidating the modified AST into formatted source code. Fortunately, there is jscodeshift, which does the parsing and consolidation of source code and provides a set of useful APIs for AST modification. Furthermore, there is AST Explorer, which acts as a real-time IDE for jscodeshift, so we can develop our jscodeshift transformation script productively. Better yet, we can author a custom typescript-eslint rule that spawns the jscodeshift script upon the presence of zk.$extends. Then, we can automatically apply the transformation to the whole codebase with the command eslint --fix. A minimal skeleton of such a transform is sketched below.
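The following is only an illustrative sketch of such a codemod, not ZK's actual migration script: it merely locates zk.$extends calls, and the real rewrite logic that builds the class declaration is elided.

TypeScript
import type { API, FileInfo } from 'jscodeshift';

// Illustrative jscodeshift codemod skeleton: find `zk.$extends(...)` calls.
export default function transformer(file: FileInfo, api: API): string {
  const j = api.jscodeshift;
  const root = j(file.source);

  root
    .find(j.CallExpression, {
      callee: {
        type: 'MemberExpression',
        object: { name: 'zk' },
        property: { name: '$extends' },
      },
    })
    .forEach((path) => {
      // A real transform would build the `export class ... extends Super`
      // AST here from the object-literal arguments of zk.$extends.
      console.log(`zk.$extends found at line ${path.value.loc?.start.line}`);
    });

  // Consolidate the (possibly modified) AST back into source code.
  return root.toSource();
}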
Let’s turn to the type T in the example above. Since jscodeshift presents us with the lossless AST (including comments), we can author a visitor that extracts the @return JSDoc of getter() if it can be found; if not, we can let the visitor walk into the method body of getter() and try to deduce the type T, e.g., deduce T to be string if the return value of getter() is the concatenation of this._field2 with some string. If that is still of no avail, we specify T as void so that after jscodeshift is applied, the TypeScript compiler will warn us about a type mismatch. This way, we can perform as much automated inference as possible before manual intervention, and the sections that require manual inspection are accurately surfaced by the compiler thanks to our deliberate fault injection. Besides whole-file transformations like jscodeshift that can only run in batch mode, the typescript-eslint project allows us to author small and precise rules that update source code in an IDE, like VSCode, in real time. For instance, we can author a rule that marks properties of classes or namespaces that begin or end with single underscores as @internal, so that documentation extraction tools and type definition bundlers can ignore them:

TypeScript
export namespace N {
  export function _helper() {}
  export class A {
    /**
     * Description ...
     */
    protected doSomething_() {}
  }
}

TypeScript
export namespace N {
  /** @internal */
  export function _helper() {}
  export class A {
    /**
     * Description ...
     * @internal
     */
    protected doSomething_() {}
  }
}

Regarding the example above, one would have to determine the existence of property-associated JSDoc, the pre-existence of the @internal tag, and the position to insert the @internal tag if missing. Since typescript-eslint also presents us with a lossless AST, it is easy to find the associated JSDoc of class or namespace properties. The only non-trivial task left is to parse, transform, and consolidate JSDoc fragments. Fortunately, this can be achieved with the TSDoc parser. Similar to activating jscodeshift via typescript-eslint in the first example, this second example is a case of delegating JSDoc transformation to the TSDoc parser upon a typescript-eslint rule match. With sufficient knowledge of JavaScript, TypeScript, and their build systems, one can utilize jscodeshift, typescript-eslint, AST Explorer, and the TSDoc parser to make further semantic guarantees about one’s codebase, and whenever possible, automate the fix with the handy eslint --fix command. The importance of static analysis cannot be emphasized enough!

Bravo! ZK 10 Has Completely Migrated to TypeScript

For ZK 10, we have actively applied static analysis with TypeScript to all existing JavaScript code in our codebase. Not only were we able to fix existing errors (some automatically with eslint --fix); thanks to the typescript-eslint project, which enables lots of extra type-aware rules, we also wrote our own rules, and we are guaranteed never to make those mistakes again. This means less mental burden and a better conscience for the ZK development team. Our Client MVVM effort also becomes much more manageable with TypeScript in place. The development experience is close to that of Java. In fact, some aspects are even better, as TypeScript has better type narrowing, structural typing, refinement types via literal types, and intersection/union types; a small illustration follows below. As for our users, ZK 10 has become more reliable. Furthermore, our type definitions are freely available, so ZK 10 users can customize the ZK frontend components with ease and confidence. In addition, users can scale their applications during execution with Client MVVM. Adopting TypeScript in ZK 10 further enables us to scale correctness during development. Both are fundamental improvements.
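As a tiny, self-contained illustration of the narrowing and union types mentioned above (the types here are invented for the example, not part of ZK):

TypeScript
// A discriminated union: the compiler narrows the type in each branch.
type Shape =
  | { kind: 'circle'; radius: number }
  | { kind: 'rect'; width: number; height: number };

function area(shape: Shape): number {
  if (shape.kind === 'circle') {
    // Narrowed: `shape` is known to be the circle variant here.
    return Math.PI * shape.radius ** 2;
  }
  // Narrowed: only the rect variant remains.
  return shape.width * shape.height;
}

console.log(area({ kind: 'circle', radius: 2 }));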
Annex: Comparing Static Typing Solutions for JavaScript

Google’s Closure Compiler
- Type system soundness unknown; assumed to be unsound, as sound type systems are rare
- @interface denotes nominal types, whereas @record denotes structural types
- All type annotations are specified in comments, leading to code bloat, and comments often go out of sync with the code
- Most advanced and aggressive code optimization among all options listed here
- Find more information on GitHub

Facebook’s Flow
- Unsound type system
- Nominal types for ES6 classes and structural types for everything else, unlike TypeScript, where all types are structural (whereas in Java, all types are nominal)
- Compared to TypeScript, Flow has a much smaller ecosystem in terms of tooling (compatible formatter, linter, IDE plugin) and libraries (TypeScript even has the DefinitelyTyped project to host type definitions on NPM)
- Find more information in the Flow documentation

Microsoft’s TypeScript
- Supports all JavaScript features and follows the ECMA standard closely, even for subtleties: class fields and TC39 decorators
- Seamless interoperation between all mainstream JavaScript module systems: ES modules, CommonJS, AMD, and UMD
- Unsound type system
- All types are structural, which is the most natural way to model dynamic types statically, but the ability to mark certain types as nominal would be good to have; Flow and the Closure Compiler have an edge in this respect
- Also supports Closure-Compiler-style type annotations in comments
- Best-in-class tooling and a massive ecosystem; built-in support by VSCode; hence, its availability is almost ubiquitous
- Each enum variant is a separate subtype, unlike all other type systems we have ever encountered, including Rust, Scala 3, Lean 4, and Coq (see the small example after this annex)
- Find more information in The TypeScript Handbook

Scala.js
- Leverages the awesome type system of Scala 3, which is sound
- Seamlessly shares build scripts (sbt) and code with any Scala 3 project
- The emitted JavaScript code is often bloated and sometimes less efficient than that of the Closure Compiler, Flow, and TypeScript
- Learn more on the Scala.js site

ReScript
- Touted to have a sound type system (where is the proof?) like that of Scala 3, but the syntax of ReScript is closer to JavaScript and OCaml
- The type system is highly regular, like all languages in the ML family, allowing for efficient type checking, fast JavaScript emission, and aggressive optimizations
- The emitted JavaScript code is very readable; this is a design goal of ReScript
- Interoperation with TypeScript via genType
- As of ReScript 10.1, async/await is supported
- Might require familiarity with more advanced functional programming techniques and purely functional data structures
- Learn more in the ReScript Language Manual documentation
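To make the point about enum variants concrete, here is a small illustrative example (not taken from any of the projects above):

TypeScript
enum Color { Red, Green }

// Each enum member has its own literal type: `Color.Red` is a subtype of `Color`,
// so a parameter can be typed to accept only that one variant.
function onlyRed(c: Color.Red): void {
  console.log('got red:', c);
}

onlyRed(Color.Red);
// onlyRed(Color.Green); // compile error: Color.Green is not assignable to Color.Red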