Since Java 1.2/1.3, each JVM thread has corresponded to exactly one underlying OS thread. When you execute new Thread(runnable).start(), a system call is made and a new OS thread is spawned. This is computationally expensive.
A web server can start one thread per request, but starting thousands of threads may bring the system down. That is why we use thread pools: to cap the number of threads and reuse them.
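A minimal sketch of the pooling approach, using the standard java.util.concurrent.Executors API (the class name PoolDemo and the pool size of 4 are arbitrary choices for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Runs n simulated requests on a pool of 4 reusable OS threads.
    static int runTasks(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.submit(handled::incrementAndGet); // stand-in for request handling
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return handled.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(100) + " requests handled by 4 threads");
    }
}
```

No matter how many requests arrive, the number of OS threads never exceeds the pool size; the trade-off is that a long blocking request occupies one of the scarce pool threads for its whole duration.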
Project Loom introduces an execution model in which a JVM thread does not necessarily correspond to an OS thread. Instead, these threads, called virtual threads, are managed by the JVM. Virtual threads run on one or more OS threads (called carrier threads), much as a process can run on several CPUs in a multi-core system.
Creating and blocking virtual threads are cheap operations. The Java runtime transforms blocking calls into non-blocking ones and hides this management from the developer, unlike tools such as Netty, where the developer is responsible for handling blocking explicitly.
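A short way to see this in practice (assuming a JDK where virtual threads have shipped, e.g. JDK 21, whose API differs from the early preview shown later in this article): thousands of virtual threads can all block in Thread.sleep at once, and the whole batch finishes in roughly one sleep interval, because each parked virtual thread releases its carrier for the others.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingDemo {
    public static void main(String[] args) {
        Instant start = Instant.now();
        // One virtual thread per task; a blocking sleep parks the virtual
        // thread and frees its carrier OS thread for other virtual threads.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(() -> {
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for all submitted tasks to finish
        long elapsed = Duration.between(start, Instant.now()).toMillis();
        System.out.println("10000 sleeps finished in ~" + elapsed + " ms");
    }
}
```

With 10,000 platform threads this would either exhaust system resources or, on a small pool, take many minutes; with virtual threads it completes in well under a second on a typical machine.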
Since this blocking-to-non-blocking transformation is transparent to the developer, no new programming model (async/await, promises, futures, and so on) is needed. Instead, you can start virtual threads with essentially the same API used for "regular" threads:
// creates a "regular" thread
Thread.builder().task(runnable).build();

// creates a virtual thread
Thread.builder().virtual().task(runnable).build();
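Note that the builder API above comes from an early Loom preview. In the builds that eventually shipped (previewed in JDK 19, final in JDK 21), the entry points became Thread.ofVirtual() and the shorthand Thread.startVirtualThread. A small sketch, assuming one of those JDKs:

```java
public class VirtualDemo {
    public static void main(String[] args) throws InterruptedException {
        // Creates and starts a virtual thread; blocking inside it
        // parks the virtual thread, not its carrier OS thread.
        Thread vt = Thread.startVirtualThread(() -> {
            try {
                Thread.sleep(100); // a blocking call, cheap on a virtual thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();
        System.out.println(vt.isVirtual()); // prints true
    }
}
```

The key point survives the API rename: virtual threads are still plain java.lang.Thread objects, so existing thread-based code and debugging tools keep working.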
Project Loom (not ready for production use at the time of this writing) will improve concurrent programming in Java by freeing developers from managing blocking calls themselves.
For more information about Project Loom, visit the wiki page: https://wiki.openjdk.java.net/display/loom/Main.