
Devoxx 2024: what memorable new tech was on show?

This year, the Devoxx conference celebrates its 12th edition in France. Established in the early 2000s by the Belgian JUG (Java User Group), it was exported to France for the first time in 2012. The event is aimed at developers, software publishers and the web development business community, and focuses mainly on the Java ecosystem. It's an opportunity to check out what's new in languages and frameworks, as well as the latest innovations. And don't worry, we'll get onto the topic everybody's talking about: artificial intelligence.

What’s new in the Java ecosystem?

Historically, Devoxx is above all a conference focused on Java and its new features. This year, the talks looked at Java 21, the latest LTS (Long-Term Support) version, focusing specifically on its adoption and optimisation.

Compared to previous years, 2024 saw a sharp drop in coverage of native Java technologies, with few talks on the topic. Several presentations did, however, cover warm-up and optimisation, offering interesting insights into improving performance.

Warm-up and optimisation

The main aim of improving the warm-up process (i.e. start-up time) and optimising an application's overall performance, particularly by reducing its memory footprint, is to make Java applications better suited to scalable hosting environments. A Java application is usually very effective once it's up and running, but it can take several seconds to start, which makes it a poor fit for hosting models such as serverless.

The current state of native compilation technologies for Java

At previous years' events, there were a number of talks on native compilation technologies (such as Spring Native, Quarkus and Micronaut). The promise of significantly faster start-up times was attractive, but the approach requires significant adaptation: not all libraries are compatible, reflection-related issues come up, and the build process requires more time and resources than classic compilation.
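
As one example of the adaptation required, Quarkus asks developers to explicitly register classes that are accessed reflectively, so that the closed-world native build keeps them (a small sketch; the DTO is invented for illustration):

```java
import io.quarkus.runtime.annotations.RegisterForReflection;

// Without this registration, native image analysis may strip the class,
// and reflective access (e.g. by a JSON mapper) would fail at runtime.
@RegisterForReflection
public class CustomerDto {
    public String name;
    public String email;
}
```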

This year, little attention was paid to this topic, which suggests that native compilation technologies for Java haven't yet found their place. The technical hurdles remain too significant to make Java competitive for serverless applications. By comparison, rival platforms like Python, Node.js and Go naturally offer very fast start-up times without any added complexity.

Java and artificial intelligence: The rise of LangChain4J

This year at Devoxx, a large number of talks explored combining artificial intelligence and Java, mainly showcasing LangChain4J.
What exactly is it? Think of it as the Hibernate of natural language processing for Java: just as Hibernate abstracts away the underlying database, LangChain4J abstracts away the underlying language model. It lets you integrate natural language processing and large language models (LLMs) into a Java application, simplifying their use without having to worry about which provider sits behind them. Currently, LangChain4J supports around fifteen different language models.
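
As a small sketch of what that provider abstraction looks like in practice, based on LangChain4J's pre-1.0 API (the prompt is illustrative, and swapping providers means swapping the builder, not the calling code):

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ChatExample {
    public static void main(String[] args) {
        // The concrete class is provider-specific (here OpenAI), but the
        // rest of the code only ever sees the ChatLanguageModel interface.
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        String answer = model.generate("Explain RAG in one sentence.");
        System.out.println(answer);
    }
}
```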

This trend shows that Java is now mature enough to integrate artificial intelligence features; all that's left is to explore them and find relevant use cases.

Memorable new tech

In summary, there were few major surprises, with the notable exception of LangChain4J, which is gaining adoption. Java 21 stands out as a major release, with public support announced until 2028, and is already being taken up by the community.

Java's philosophy today is marked by its relative maturity compared to other languages, which results in a pragmatic approach. Rather than chasing new features at all costs, Java tends to adopt elements that have already proven themselves elsewhere.

What new tech is there in the field of Artificial Intelligence?

Exploring RAG to boost data precision

Retrieval Augmented Generation (RAG) combines the best of both worlds: AI-based generation and factual precision. How does it work? Fundamentally, an AI, generally a Large Language Model (LLM), consults real data such as documents, databases or text files to construct its responses. This data is separate from the data baked into the model during training. Access to it makes responses more accurate and sourced: the origin of information that wasn't part of the model's initial training can be identified. The LLM therefore offers greater precision, more up-to-date answers, and traceability of its sources.
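
To make the retrieve-augment-generate flow concrete, here is a minimal sketch; the Embedder, VectorStore and Llm interfaces are invented stand-ins for whatever embedding model, vector store and LLM client is actually used:

```java
import java.util.List;

public class RagSketch {

    // Hypothetical abstractions; in practice these come from your
    // embedding model, vector store and LLM client libraries.
    interface Embedder { float[] embed(String text); }
    interface VectorStore { List<String> search(float[] query, int topK); }
    interface Llm { String generate(String prompt); }

    static String answer(String question, Embedder embedder, VectorStore store, Llm llm) {
        // 1. Retrieve: find the documents closest to the question in vector space.
        List<String> context = store.search(embedder.embed(question), 3);

        // 2. Augment: inject the retrieved documents into the prompt.
        String prompt = """
                Answer using only the context below, citing it where possible.

                Context:
                %s

                Question: %s
                """.formatted(String.join("\n---\n", context), question);

        // 3. Generate: the LLM answers from the supplied, up-to-date context.
        return llm.generate(prompt);
    }
}
```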

Several talks were accordingly dedicated to the presentation, creation and setup of RAG in a variety of contexts.

The development of LLMs in the field of AI: growing caution and security

Another major trend in artificial intelligence is securing Large Language Models (LLMs). The risks involved in generative LLMs are real and require effective protections to be put in place: measures against prompt injection, securing input and output data, and so on.
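
By way of illustration of input-side protection, here is a deliberately naive sketch (the deny-list and fencing tags are invented for the example; real guardrails combine classifiers, moderation endpoints and output checks):

```java
import java.util.List;

public class PromptGuard {

    // Naive deny-list; real guardrails are far more sophisticated than
    // simple string matching.
    private static final List<String> SUSPICIOUS = List.of(
            "ignore previous instructions",
            "reveal your system prompt",
            "disregard the rules");

    public static String sanitise(String userInput) {
        String lower = userInput.toLowerCase();
        for (String pattern : SUSPICIOUS) {
            if (lower.contains(pattern)) {
                throw new IllegalArgumentException("Possible prompt injection detected");
            }
        }
        // Fence the user input so the model can distinguish instructions from data.
        return "<user-input>\n" + userInput + "\n</user-input>";
    }
}
```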

After the infatuation of previous years, a more prudent approach is now emerging, supported by more comprehensive, practical feedback. In short, integrating an LLM into a business is not a simple setup exercise: security, data-update management and many other aspects need to be taken into account.

AI and code quality assessment

In terms of development, several talks explored documentation and code quality, particularly highlighting the integration of artificial intelligence in this field. For the time being, there doesn't seem to be a miracle solution that fully harnesses AI's capabilities in this context, beyond closely supervising what the AI generates and continuing to advocate good programming and maintainability practices such as clean code.

Architecture: what challenges and trends?

Architecture was a particularly rich topic this year. On top of presentations of tools, frameworks and patterns, there were many insightful feedback sessions.

Microservices for everything is over!

A number of project feedback sessions revealed the sometimes superfluous complexity of microservices. Various reports underlined the importance of aligning the architecture with functional needs. This alignment can be achieved using architectural patterns such as hexagonal architecture or approaches like Domain-Driven Design (DDD). The overall message favoured modular monoliths, which retain the flexibility to transition to a microservice architecture if and when it becomes necessary.
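
A minimal sketch of the hexagonal idea, with invented names: the domain defines a port, business logic depends only on that port, and infrastructure plugs in interchangeable adapters:

```java
import java.util.HashMap;
import java.util.Map;

// Port: the domain's only view of persistence.
interface OrderRepository {
    Order findById(String id);
    void save(Order order);
}

record Order(String id, String status) {}

// Domain service: no framework or database imports here.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void markShipped(String orderId) {
        Order order = repository.findById(orderId);
        repository.save(new Order(order.id(), "SHIPPED"));
    }
}

// Adapter: swapping this for a JPA or REST implementation never
// touches OrderService, which eases a later move to microservices.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();
    public Order findById(String id) { return store.get(id); }
    public void save(Order order) { store.put(order.id(), order); }
}
```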

Focus on data streaming

Several talks showcased streaming and queuing system setups. Unlike in previous years, Kafka featured less prominently, signalling a return to pragmatism after a period of near-systematic streaming adoption.

The talk “Apache Pulsar: finally an alternative to Kafka?” clearly set out the principles of streaming and queuing, summarised below (with a code sketch after the two lists).

Message Queuing:

  • Used for communication between services, with variable-duration tasks.

  • Favours decoupling and does not require sequential processing.

  • Elasticity is managed consumer-side (consumers are added dynamically).

  • Possibility of managing congestion with features like Dead Letter Queue.  

Message Streaming:

  • Ideal for real-time monitoring, providing very high performance. 

  • Allows data historization.

  • Processes messages in order and can manage massive volumes of messages.
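
To make the two models concrete, here is a hedged sketch using the Apache Pulsar Java client, where the subscription type selects queue-like or stream-like semantics (topic and subscription names are invented):

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class PulsarSemantics {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Queue-like: a Shared subscription spreads messages across however
        // many consumers are attached (elasticity on the consumer side).
        Consumer<byte[]> worker = client.newConsumer()
                .topic("tasks")
                .subscriptionName("workers")
                .subscriptionType(SubscriptionType.Shared)
                .subscribe();

        // Stream-like: an Exclusive subscription keeps a single consumer,
        // preserving message order.
        Consumer<byte[]> reader = client.newConsumer()
                .topic("events")
                .subscriptionName("analytics")
                .subscriptionType(SubscriptionType.Exclusive)
                .subscribe();

        Message<byte[]> task = worker.receive();
        worker.acknowledge(task);

        client.close();
    }
}
```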

Currently, streaming is often used inappropriately for queuing use cases, which creates significant complexity, such as having to add partitions just to scale consumers.

On the Kafka side, talks mainly focused on feedback, including the system set up by PMU to collect, process, store and deliver horse racing and betting data. With multiple data sources emitting at different frequencies and in different formats, it's essential to aggregate and format this data and make it reliable and available within relatively short timeframes. In this case, Kafka was used via an AWS-managed service, which certainly simplified its use and administration.

Focus on observability

OpenTelemetry has set a new benchmark for observability. Its integration into various ecosystems, including Java, containers, Kubernetes and Traefik, didn't go unnoticed.

OpenTelemetry seems on track to become the standard for collecting logs, metrics and traces. It's a set of tools, software development kits (SDKs) and APIs designed to simplify the instrumentation of applications across logs, metrics, traces and more.
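
For instance, manually creating a trace span with the OpenTelemetry Java API looks roughly like this (the tracer name and attribute are illustrative; in many cases the OpenTelemetry Java agent instruments frameworks without any code changes):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class OrderProcessor {

    private final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example.orders");

    void processOrder(String orderId) {
        Span span = tracer.spanBuilder("process-order").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            span.setAttribute("order.id", orderId);
            // ... business logic; spans created here become children automatically
        } catch (Exception e) {
            span.recordException(e);
            throw e;
        } finally {
            span.end();
        }
    }
}
```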

In summary: a return to pragmatism

In brief, we are seeing a more pragmatic approach in the solutions implemented and recommended, in contrast to what we observed a few years ago with what could be called “Kube and Kafka for everything”. We are witnessing a back-to-basics movement: correctly applying clean architecture, clean code and Domain-Driven Design (DDD), and using tools appropriately rather than repurposing them or following trends for their own sake.

DevOps expansion

Has DevOps reached maturity?

There were a number of feedback sessions on the transition to DevOps at big businesses like Michelin, BPI and PMU, particularly from an architecture point of view. This suggests DevOps is gaining ground in large organisations, whereas previously its adoption was generally limited to small-scale projects or start-ups.

These large-scale DevOps transitions require fundamental changes, even at service and infrastructure supplier level. Ultimately, however, they enable more frequent deliveries with fewer faults, creating value more quickly and efficiently.

Development of containerisation and Docker

In the field of containerisation and Docker, we are seeing a paradigm shift. After years of infatuation with Kubernetes, talks and presentations now focus more on the adoption and maturing of tools.

This is demonstrated in particular by initiatives like Compose Bridge, which eases the transition from docker-compose to Kubernetes. Meanwhile, tools like Traefik (a reverse proxy) are adopting, or planning to adopt, standards like OpenTelemetry and WebAssembly.

Databases and Data: the emergence of vector search

This year, a large number of topics covered the workings and optimisation of relational databases, while NoSQL databases were less visible. Their adoption in the web field appears limited to specific use cases such as high-volume data processing, real-time workloads or IoT. Relational databases seem unlikely to see their popularity fall any time soon, as their position in popularity rankings shows.

Apart from Elasticsearch, NoSQL databases were barely present this year. Elasticsearch itself was in the spotlight again, particularly with the emergence of vector search.

What is vector search?  

Let's take the example developed in the talk “Search on steroids - a history of semantics”, based on the application “jolimoi”, an e-commerce site for beauty products and cosmetics. Imagine you're searching for a product for curly hair. A classic search would only return products whose data sheet explicitly contains those terms, so you could miss out on products that might also interest you, such as those intended for frizzy hair. Vector search solves this issue.

How can search be optimised with the power of vectors? 

The process involves using an “embedding” machine learning model to go through our product catalogue and transform each product into a vector. Products the model considers similar, such as those for curly hair and those for frizzy hair, end up with vectors that are close together. Once the catalogue has been vectorised, the user's search is vectorised by the same model so that its nearest neighbours can be found. This lets us match searches that a “classic” query would have missed.
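
A toy sketch of the nearest-neighbour step, assuming the embedding model has already produced the vectors (real engines such as Elasticsearch use approximate indexes rather than this brute-force scan):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class VectorSearch {

    // Cosine similarity: 1.0 means same direction, 0 means unrelated.
    static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Brute-force top-k search over product vectors keyed by product name.
    static List<String> topK(Map<String, float[]> catalogue, float[] query, int k) {
        return catalogue.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, float[]> e) -> -cosine(e.getValue(), query)))
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }
}
```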

And combined search?  

Combined search merges the results of a vector search and a classic search. The two result sets can be merged in several ways, particularly by prioritising based on match scores. Generally, this approach guarantees that the first results, if any, closely match the initial search, while the following results, which are more varied thanks to vector search, are included too.
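
One well-known merging strategy, though not necessarily the one shown at the talk, is reciprocal rank fusion, in which each document earns 1/(k + rank) from every result list it appears in; a minimal sketch:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HybridSearch {

    // Reciprocal rank fusion: each list contributes 1 / (k + rank) per
    // document; k (commonly 60) dampens the advantage of top ranks.
    static Map<String, Double> fuse(List<String> classicHits, List<String> vectorHits, int k) {
        Map<String, Double> scores = new HashMap<>();
        for (List<String> hits : List.of(classicHits, vectorHits)) {
            for (int rank = 0; rank < hits.size(); rank++) {
                scores.merge(hits.get(rank), 1.0 / (k + rank + 1), Double::sum);
            }
        }
        return scores; // sort by descending score for the merged ranking
    }
}
```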

New tech: what about security?

As previously mentioned, security was mainly addressed in the context of Artificial Intelligence. However, it's worth noting that there were also more traditional talks on securing infrastructure, particularly Kubernetes containers and clusters, as well as on secret management practices. These included “The end of shared passwords with Vault and Boundary” and “The nightmare of attackers: infrastructure without secrets”.
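
As an illustration of what retrieving a secret without shared passwords can look like, here is a hedged sketch that reads from Vault's KV v2 HTTP API using only JDK classes (the address, path and token handling are placeholders; production setups use short-lived tokens issued by an auth method rather than an environment variable):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VaultRead {
    public static void main(String[] args) throws Exception {
        // KV v2 read: GET <vault-addr>/v1/secret/data/<path>
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8200/v1/secret/data/myapp/db"))
                .header("X-Vault-Token", System.getenv("VAULT_TOKEN"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The secret's key-value pairs sit under data.data in the JSON body.
        System.out.println(response.body());
    }
}
```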

Technological advances now allow us to guarantee a high level of security without having too much of an impact on developers and operations teams. Providing tools that are less restrictive and more easily adopted encourages good security practice and reduces the risk of workarounds. It also contributes to the productivity and comfort of both development (dev) and operations (ops) teams.

Overall, most of the topics addressed featured a security component, which is extremely positive!

This year's Devoxx proved a valuable source of learning and fresh perspectives. We observed a pragmatic approach to technologies, focusing on rational adoption and the optimisation of existing tools. The lively discussions on Java, AI, architecture and DevOps illustrated a community determined to align technological solutions with real needs.

In addition to emerging trends and new tech, this conference reminds us of the importance of using technologies wisely to precisely meet specific requirements, without giving in to short-lived infatuations. Simplicity and effectiveness seem to be the flagship values for the coming years.

We’re looking forward to seeing how these trends will develop, and are keen to keep sharing our experiences and findings. We’ll see you next year for new adventures at Devoxx!
