Most Commonly Asked Java Interview Questions

Most Commonly Asked Java Interview Questions | This article covers the top 28 questions most frequently asked in interviews, spanning core Java, collections, garbage collection, and Spring.

  • ArrayList uses a dynamic array as its internal data structure to store elements, while LinkedList uses a doubly linked list. 
  • Manipulation (insertion and deletion) in ArrayList is slow because it internally uses an array, but searching is very fast. In LinkedList, manipulation is fast but searching is very slow, because LinkedList does not implement the RandomAccess marker interface.
  • ArrayList can act only as a list because it implements only the List interface, but LinkedList can act as both a list and a queue because it implements the List and Deque interfaces. 
  • ArrayList is better for storing and accessing data, and LinkedList is better for manipulating data.
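The differences above can be observed directly in code. This is a minimal sketch: it checks the RandomAccess marker interface and uses a LinkedList through the Deque interface.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.LinkedList;
import java.util.List;
import java.util.RandomAccess;

public class ListComparison {
    public static void main(String[] args) {
        List<String> arrayList = new ArrayList<>(Arrays.asList("a", "b", "c"));
        List<String> linkedList = new LinkedList<>(arrayList);

        // ArrayList implements the RandomAccess marker interface; LinkedList does not
        System.out.println(arrayList instanceof RandomAccess);  // true
        System.out.println(linkedList instanceof RandomAccess); // false

        // LinkedList also implements Deque, so it can act as a queue
        Deque<String> queue = new LinkedList<>();
        queue.offerLast("first");
        queue.offerLast("second");
        System.out.println(queue.pollFirst()); // first
    }
}
```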

Lazy Initialization is a design pattern used to defer the initialization of an object until it is needed. In the context of Hibernate, lazy initialization is used to delay the loading of associated entities until they are accessed for the first time.

For example, consider an Employee class that has an Address mapped to it using a one-to-many or many-to-one relationship. If this relationship is configured with FetchType.LAZY, the Address entity will not be loaded from the database until it is explicitly accessed. This can be beneficial in reducing the initial load time, as only the necessary data is fetched initially.
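The underlying pattern, stripped of Hibernate's proxy machinery, can be sketched in plain Java. The Employee and Address classes here are hypothetical stand-ins; the point is simply that the expensive object is not created until first access.

```java
// A minimal sketch of the lazy-initialization idea (hypothetical Employee/Address
// classes; this is the general design pattern, not Hibernate's proxy mechanism).
public class LazyDemo {
    static class Address {
        Address() {
            // In Hibernate this would be a database hit; here we just log it
            System.out.println("Address loaded");
        }
    }

    static class Employee {
        private Address address; // deliberately left null until needed

        Address getAddress() {
            if (address == null) {       // defer creation until first access
                address = new Address();
            }
            return address;
        }
    }

    public static void main(String[] args) {
        Employee emp = new Employee();
        System.out.println("Employee created, address not yet loaded");
        emp.getAddress(); // triggers the "load" on first access
    }
}
```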

Advantages

  • Reduced Initial Load Time: By deferring the loading of associated entities, the initial load time is significantly reduced compared to eager loading, where all associated entities are loaded immediately.
  • Efficient Resource Utilization: Resources are used more efficiently as only the required data is fetched from the database.

Disadvantages

  • Potential Performance Impact: Lazy initialization can lead to additional database hits when the associated entities are accessed, which may impact performance. This is because a proxy object is used to represent the association, and accessing it triggers a database query.
  • The first-level cache is enabled by default, while the second-level cache is disabled by default and has to be explicitly enabled in the Hibernate configuration. 
  • The first-level cache can never be disabled; the second-level cache, once enabled, can be turned off again through the configuration. 
  • The first-level cache is associated with the session while the second-level cache is associated with the SessionFactory. So when the session gets closed, then the objects that are cached for the first level are lost, but that’s not the case with the second level cache. 
  • The data stored in the first-level cache is accessible only to the session that maintains it while the second-level cache is accessible to the whole program. 
  • Hashtable is synchronized and hence thread-safe, but HashMap is not synchronized and hence not thread-safe at all. Because of that, HashMap is much faster than Hashtable. 
  • HashMap allows the insertion of one null key and multiple null values, whereas Hashtable is much more restrictive: it does not allow any null key or null value. 
  • Hashtable was introduced in the initial version of Java, 1.0, while HashMap was introduced in Java 1.2. 
  • If you want to make a HashMap synchronized, you can use the Collections class synchronizedMap() method. Hashtable is synchronized by default, so nothing extra is needed. 
  • When it comes to iteration, HashMap is traversed with an Iterator, which is the universal iteration technique. Hashtable is traditionally traversed with an Enumeration, although an Iterator can be used as well. 
  • The Iterator in HashMap is fail-fast while the Enumeration in Hashtable is fail-safe. 
  • HashMap extends the AbstractMap class while Hashtable extends the Dictionary class. 
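A short sketch illustrating the null-handling difference and the synchronizedMap() wrapper mentioned above:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapNullDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "null key is allowed"); // one null key is fine
        hashMap.put("k", null);                   // null values are fine too
        System.out.println(hashMap.get(null));

        // Wrapping a HashMap for thread safety, as an alternative to Hashtable
        Map<String, String> syncMap = Collections.synchronizedMap(new HashMap<>());
        syncMap.put("a", "1");

        Map<String, String> table = new Hashtable<>();
        try {
            table.put(null, "boom"); // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable threw NullPointerException");
        }
    }
}
```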

Serialization in Java is a mechanism for converting the state of an object into a byte stream. This byte stream can be saved to a file or transmitted over a network, and because it is platform-independent, it can be deserialized on any platform to recreate the original object.

Serialization is essential for several reasons:

  • Persistence: It allows the state of an object to be saved to a storage medium, such as a file or a database so that it can be retrieved and used later.
  • Communication: It enables objects to be transmitted over a network between different Java Virtual Machines (JVMs). This is particularly useful in distributed systems where objects need to be shared between different applications.
  • Caching: Serialized objects can be cached to improve performance by avoiding the need to recreate objects from scratch.
  • Deep Copy: Serialization can be used to create a deep copy of an object by serializing and then deserializing it.
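A minimal round-trip sketch, using an in-memory byte stream rather than a file (the hypothetical Point class stands in for any Serializable object). It also demonstrates the deep-copy use from the last bullet: the deserialized object is a distinct instance with the same state.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // The class to be serialized must implement java.io.Serializable
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws Exception {
        Point original = new Point(3, 4);

        // Serialize the object's state into a byte stream (in memory here,
        // but the same bytes could go to a file or across a network)
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(original);
        }

        // Deserialize the byte stream back into an object (a deep copy)
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            Point copy = (Point) ois.readObject();
            System.out.println(copy.x + "," + copy.y); // 3,4
            System.out.println(copy != original);      // true: a distinct object
        }
    }
}
```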

Inversion of Control (IoC) is a design principle in which the control of object creation and management is transferred from the user to a container or framework. In the context of Spring, the IoC container, such as the BeanFactory or ApplicationContext, takes over the responsibility of instantiating, configuring, and managing the lifecycle of objects (beans).

Dependency Injection (DI) is a specific implementation of IoC. It allows the container to inject dependencies into a class, rather than the class creating its own dependencies. This promotes loose coupling and enhances the testability and maintainability of the code.

In Spring, DI can be achieved in three ways:

  • Java-based Configuration: Using Java code to define the beans and their dependencies.
  • XML-based Configuration: Using XML files to configure the beans and their dependencies.
  • Annotation-based Configuration: Using annotations such as @Autowired to automatically wire dependencies.
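The principle behind DI can be shown without Spring at all. This is a hand-rolled sketch of constructor injection with hypothetical Engine/Car classes; in a real Spring application, the IoC container performs this wiring for you (for example via @Autowired on the constructor).

```java
// A minimal sketch of constructor-based dependency injection
// (hypothetical Engine/Car names; Spring's container automates this wiring).
public class DiDemo {
    interface Engine { String start(); }

    static class PetrolEngine implements Engine {
        public String start() { return "petrol engine started"; }
    }

    static class Car {
        private final Engine engine;

        // The dependency is injected from outside instead of the class
        // creating it with `new`, which keeps Car loosely coupled to Engine
        Car(Engine engine) { this.engine = engine; }

        String drive() { return engine.start(); }
    }

    public static void main(String[] args) {
        // Playing the "container" role: build the dependency and inject it.
        // In a test, a mock Engine could be injected here instead.
        Car car = new Car(new PetrolEngine());
        System.out.println(car.drive()); // petrol engine started
    }
}
```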

A WeakHashMap is a specialized implementation of the Map interface in Java. It functions similarly to a regular HashMap, but with a key difference in how it handles memory management for its keys.

In a regular HashMap, if you use an object as a key (e.g., an Employee object) and later remove all references to this key object, the HashMap will still retain the key in memory. This means that even if you set the key reference to null, the HashMap will continue to hold onto it, preventing it from being garbage collected.

However, a WeakHashMap behaves differently. If you set the key reference to null and there are no other references to it, the WeakHashMap allows Java’s garbage collector to remove the key from memory. This means that the WeakHashMap does not stubbornly hold onto objects as a regular HashMap does.

In simpler terms, while a regular HashMap retains objects even when they are no longer needed, a WeakHashMap releases them when they are no longer in use.

public class Employee {
    private String name;

    public Employee(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Employee [name=" + name + "]";
    }
}
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

public class Test {
   public static void main(String[] args) {
       Map<Employee, String> hashMap = new HashMap<>();
       Map<Employee, String> weakHashMap = new WeakHashMap<>();

       Employee emp = new Employee("Jerry");

       hashMap.put(emp, "RegularHashMap");
       weakHashMap.put(emp, "WeakHashMap");

       emp = null;
       System.gc();

       System.out.println(hashMap.keySet());
       System.out.println(weakHashMap.keySet());
   }
}

Expected Output:-

If the garbage collector removes the Employee key from memory (the usual case after System.gc()), the WeakHashMap entry is cleared while the HashMap entry survives:

[Employee [name=Jerry]]
[]

If the garbage collector has not yet removed the Employee object from memory, both maps still show the key:

[Employee [name=Jerry]]
[Employee [name=Jerry]]

A functional interface is a concept introduced in Java 8. Prior to Java 8, interfaces could not contain default or static methods, and there were no restrictions on the number of abstract methods they could have. However, with the enhancements in Java 8, significant changes were made to interfaces.

A functional interface is a specific type of interface that can have only one abstract method. It can, however, include multiple default and static methods without any restrictions. The key restriction is that it can only have one abstract method.

Functional interfaces are crucial in Java as they serve as the target types for lambda expressions, which are a way to implement functional programming in Java. Without a functional interface, lambda expressions would have no reference and thus no utility. Therefore, functional interfaces are essential for leveraging lambda expressions in Java.
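A small sketch tying the pieces together: one abstract method, an optional default method, and a lambda whose target type is the interface (Greeter is a hypothetical name for illustration).

```java
@FunctionalInterface
interface Greeter {
    String greet(String name);           // exactly one abstract method

    default String shout(String name) {  // default methods are allowed
        return greet(name).toUpperCase();
    }
}

public class FunctionalInterfaceDemo {
    public static void main(String[] args) {
        // The lambda expression's target type is the functional interface
        Greeter greeter = name -> "Hello, " + name;
        System.out.println(greeter.greet("Java")); // Hello, Java
        System.out.println(greeter.shout("Java")); // HELLO, JAVA
    }
}
```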

In Java 1.5, the ConcurrentHashMap was introduced to address performance issues associated with synchronized maps. While a regular HashMap is not thread-safe, a synchronized map, created using Collections.synchronizedMap(), ensures thread safety by synchronizing all read and write operations. However, this approach has significant performance drawbacks.

In a synchronized map, whenever a thread performs a read or write operation, it locks the entire map. This means that all threads must wait for the lock to be released before they can proceed with their operations, leading to poor performance in a multi-threaded environment.

To overcome this limitation, the ConcurrentHashMap was introduced. It employs a technique called lock striping, which improves performance by dividing the map into segments. Each segment can be locked independently, allowing multiple threads to read and write concurrently without blocking each other.

Lock Striping in ConcurrentHashMap

  • Read Operations: In ConcurrentHashMap, read operations are non-blocking and can be performed concurrently by multiple threads. This is because read operations do not require locking, allowing for high concurrency and minimal contention.
  • Write Operations: Write operations in ConcurrentHashMap are handled using fine-grained locking. Instead of locking the entire map, only the specific segment (or bucket) being written to is locked. This means that other segments remain accessible for read and write operations by other threads, significantly improving performance.
  • Handling Collisions: In earlier versions, when two keys mapped to the same bucket (a collision), Java used to store the values in a linked list at that bucket. In the worst case this degraded performance to O(N), meaning the time taken for operations grew with the number of colliding elements.
  • Introduction of Balanced Trees: To address this, Java 1.8 replaces the linked list with a balanced tree structure (specifically a red-black tree). Initially, collisions are still handled by storing elements in a linked list; once the number of elements in a bucket exceeds a certain threshold, the linked list is converted into a balanced tree. This improves the worst-case performance to O(log N), which is much better than the previous O(N) scenario and significantly boosts HashMap performance in cases with frequent collisions.
  • Removal of alternative String Hash Functions: Additionally, Java 8 removed some alternative string hash functions that were introduced in Java 7. These alternative hash functions were removed to simplify and streamline the HashMap implementation.
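A short sketch of concurrent writes that a plain HashMap could not handle safely. Two threads update the same ConcurrentHashMap; merge() performs each increment atomically, so no updates are lost.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        // Each thread increments the same counter 1,000 times. On
        // ConcurrentHashMap, merge() is atomic, so no increment is lost.
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                counts.merge("hits", 1, Integer::sum);
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counts.get("hits")); // 2000
    }
}
```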

In Java 8, the Optional class is a container object that may or may not contain a non-null value. It’s designed to help prevent NullPointerException and provide a more expressive way to handle situations where a value may be absent.

Before Java 8, developers often returned null to represent the absence of a value. However, this approach could lead to NullPointerException errors if not handled properly. Optional provides a cleaner and safer alternative to handling such scenarios.

Here’s a brief overview of Optional:-

  • Creating Optional Objects: You can create an Optional object using the of() method if the value is present or using empty() if the value is absent.
  • Accessing Values: You can retrieve the contained value using get(), but it’s recommended to use methods like orElse() or orElseGet() to provide default values in case the value is absent or to avoid NoSuchElementException.
  • Checking Presence: Methods like isPresent() and ifPresent() allow you to check whether a value is present and perform actions accordingly.
  • Chaining Operations: Optional supports method chaining, enabling you to perform operations on the contained value without exposing it to potential null values.
  • Avoiding NullPointerExceptions: By using Optional, you can write cleaner and more robust code by explicitly handling scenarios where a value may be absent, reducing the risk of NullPointerExceptions.

Here’s a simple example:-

Optional<String> optionalName = Optional.of("John"); // present value
Optional<String> optionalName1 = Optional.empty();   // empty Optional
String name = optionalName.orElse("Unknown");
System.out.println("Name: " + name);
optionalName.ifPresent(n -> 
      System.out.println("Name length: " + n.length()));

In this example, optionalName contains the value “John”. We retrieve the value using orElse(), providing a default value if it’s absent. Then, we use ifPresent() to act if the value is present.

map()

  • Purpose: Transforms each element of the stream into another form.
  • Output: A stream of transformed elements.
  • Example: Converting a list of strings to their lengths.

flatMap()

  • Purpose: Transforms each element into a stream and then flattens these streams into a single stream.
  • Output: A single flattened stream.
  • Example: Flattening a list of lists into a single list.
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

List<String> words = Arrays.asList("hello", "world");
List<Integer> lengths = words.stream()
                             .map(String::length)
                             .collect(Collectors.toList());
System.out.println(lengths); // Output: [5, 5]

List<List<String>> nestedList = Arrays.asList(
    Arrays.asList("a", "b"),
    Arrays.asList("c", "d")
);
List<String> flatList = nestedList.stream()
                                  .flatMap(List::stream)
                                  .collect(Collectors.toList());
System.out.println(flatList); // Output: [a, b, c, d]

With the Factory Method pattern, you have a single factory that creates objects directly. With the Abstract Factory pattern, you have a factory that creates other factories, and those factories in turn create the objects. In other words, the Factory Method is a single level of abstraction that returns one kind of object, while the Abstract Factory adds a second level: each of its methods returns another factory, whose methods then create the actual objects.
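A compact sketch contrasting the two levels of abstraction (Shape, Circle, and the factory names are hypothetical illustrations, not a canonical implementation):

```java
public class FactoryDemo {
    interface Shape { String name(); }
    static class Circle implements Shape { public String name() { return "circle"; } }
    static class Square implements Shape { public String name() { return "square"; } }

    // Factory Method: one level -- a method that creates the product directly
    static Shape shapeFactory(String type) {
        return type.equals("circle") ? new Circle() : new Square();
    }

    // Abstract Factory: two levels -- a factory of factories, where each
    // concrete factory knows how to create its own kind of product
    interface ShapeFactory { Shape create(); }
    static class CircleFactory implements ShapeFactory {
        public Shape create() { return new Circle(); }
    }
    static class SquareFactory implements ShapeFactory {
        public Shape create() { return new Square(); }
    }
    static ShapeFactory factoryOfFactories(String family) {
        return family.equals("round") ? new CircleFactory() : new SquareFactory();
    }

    public static void main(String[] args) {
        // One level of indirection
        System.out.println(shapeFactory("circle").name());               // circle
        // Two levels: first get a factory, then ask it for the object
        System.out.println(factoryOfFactories("round").create().name()); // circle
    }
}
```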

In Spring, multiple bean scopes are available. There are basically five major scopes: the first two, singleton and prototype, are used very often, while the remaining three (request, session, and global session) are rarely used. 

  1. Singleton: This is the default scope in Spring. Only one instance of the bean is created per Spring container, and every time the bean is requested, the same instance is returned. This can lead to data inconsistency if the bean’s state is modified, as changes will be reflected across the entire application. For example, if you have a singleton bean for an Employee with ID 1 and name “RoccoJeery”, and you modify the name to “Rocco_Jeery”, this change will be visible throughout the application. This limitation is why the prototype scope exists. 
  2. Prototype: Unlike the singleton scope, a new instance of the bean is created every time it is requested. This can be explicitly specified using @Scope(“prototype”). Each time you autowire the bean in a different class, a new instance is provided, ensuring that changes to one instance do not affect others.
  3. Request: This scope is similar to the prototype scope but is used mainly for web applications. A new bean instance is created for each HTTP request. For the duration of a single HTTP request, the same bean instance is used, but a new instance is created for each new request.
  4. Session: A single bean instance is created for the entire HTTP session. This bean is shared across the session and is destroyed when the session ends. A new bean instance is created for each new session.
  5. Global Session: This scope is used in portlet applications to create global session beans. It is rarely used and is specific to portlet environments.
public class Employee {
    private String name;
    // constructor
    // getter, setter, tostring
}
@Configuration
public class AppConfig {
    @Bean
    @Scope("prototype")
    public Employee employee() {
        return new Employee("Default Name");
    }
}
@Component
public class TestPrototypeBean {

    @Autowired
    private ApplicationContext context;

    public void demonstratePrototypeScope() {
        Employee emp1 = context.getBean(Employee.class);
        emp1.setName("Employee 1");

        Employee emp2 = context.getBean(Employee.class);
        emp2.setName("Employee 2");

        System.out.println(emp1); // Output: Employee [name=Employee 1]
        System.out.println(emp2); // Output: Employee [name=Employee 2]
    }
}

Spring:

  • Framework: A comprehensive framework for enterprise-level Java applications, providing infrastructure support.
  • Flexibility: Requires detailed configuration, which can be time-consuming but highly customizable.
  • Modules: Offers various modules like Spring Core, Spring MVC, Spring Security, etc., allowing for a modular approach.
  • Maturity: Well-established with extensive community support and resources.

Spring Boot:

  • Framework: Built on top of Spring, aiming to simplify the setup and development of new Spring applications.
  • Ease of Use: Comes with default configurations out-of-the-box, minimizing the setup time.
  • Standalone: Creates standalone applications with embedded servers (like Tomcat), eliminating the need for explicit deployment.
  • Autoconfiguration: Automatically configures Spring and third-party libraries, reducing boilerplate code.

In short, Spring Boot is like a supercharged Spring: geared toward reducing complexity and speeding up development without losing the power of Spring’s comprehensive features.

Spring AOP (Aspect-Oriented Programming) helps you add extra behavior to your code without changing the actual code itself. Think of it as a way to add features like logging, security, or transaction management to your methods without cluttering them.

Key Concepts:-

  • Aspect: It is a class that contains the extra methods you want to add, like logging.
  • Joinpoint: A specific point in your program, such as when a method is called.
  • Advice: The action taken by an aspect at a joinpoint. For example, logging before a method runs.
  • Pointcut: A set of joinpoints where the advice should be applied. It tells where and when to add the extra behavior.
  • Weaving: The process of applying aspects to your code to create an advised object.

To put it together: in Spring terms, an aspect is simply a class, and the methods it contains are called advice. These methods hold the logic for a cross-cutting concern that we want to keep separate from the business logic. An advice is executed only when a particular condition is satisfied, and that condition is expressed as a pointcut. A pointcut is an expression that matches certain join points, which are points in the execution of your business code (such as a method call). Whenever a join point is executed and the pointcut expression matches it, the corresponding advice in the aspect runs.

Example:- Imagine you have a method that processes orders, and you want to log when this method starts and ends. Instead of adding logging code directly into the method, you can create an aspect for logging.

@Aspect
@Component
public class LoggingAspect {

    @Before("execution(* com.example.OrderService.processOrder(..))")
    public void logBefore(JoinPoint joinPoint) {
        System.out.println("Starting method: " + joinPoint.getSignature().getName());
    }

    @After("execution(* com.example.OrderService.processOrder(..))")
    public void logAfter(JoinPoint joinPoint) {
        System.out.println("Ending method: " + joinPoint.getSignature().getName());
    }
}

In this example:-

  • The @Aspect annotation marks the class as an aspect.
  • The @Before and @After annotations add logging before and after the processOrder method in the OrderService class.

Benefits

  • Cleaner Code: Keeps your main code free from extra stuff like logging.
  • Reusable: You can use the same aspect in different parts of your application.
  • Easy to Maintain: If you need to change the logging, you only do it in one place.

Spring AOP helps you keep your code neat and organized by allowing you to add extra features without mixing them with your main business logic.

Before Java 8, Java used a memory area called PermGen (Permanent Generation) to store class metadata. However, PermGen had a significant limitation: it was of fixed size and could not be resized dynamically. This often led to `OutOfMemoryError` when the space was exhausted, as it couldn’t adapt to the growing needs of the application. PermGen was part of the JVM’s heap memory, which meant it was subject to garbage collection but had constraints in terms of flexibility and scalability.

With the introduction of Java 8, MetaSpace replaced PermGen to address these issues. MetaSpace uses native memory (outside the JVM heap), allowing it to grow and shrink dynamically based on the application’s requirements. This dynamic resizing capability reduces the risk of OutOfMemoryError related to class metadata storage and improves garbage collection efficiency and overall JVM performance.

Key Differences

  • Memory Type: PermGen is part of the JVM heap memory, while MetaSpace uses native memory outside the JVM heap.
  • Size Management: PermGen has a fixed size, leading to potential memory issues, while MetaSpace is dynamically resizable, adapting to the application’s needs.
  • Performance: PermGen is limited by its fixed size and heap memory constraints, while MetaSpace offers enhanced performance due to dynamic sizing and better garbage collection.

Before Java 8, if an application loaded many classes, it could run into OutOfMemoryError due to the fixed size of PermGen. With MetaSpace in Java 8 and later, the same application can handle more classes without running into memory issues, as MetaSpace can grow as needed.

In summary, MetaSpace provides a more flexible and efficient way to manage class metadata, addressing the limitations of PermGen and improving the overall performance and stability of Java applications.

A multi-catch block in Java (introduced in Java 7) lets a single catch block handle multiple specific types of exceptions. Traditionally, a try block is followed by multiple catch blocks, each handling a different exception type. With a multi-catch block, you instead group several exception types together using the pipe operator (|), so one catch block handles them all. This makes your code cleaner and more concise: instead of writing a separate catch block for each type of exception, you handle them all in one place.

public class MultiCatchExample {
    public static void main(String[] args) {
        try {
            // Some code that might throw exceptions
            String str = null;
            System.out.println(str.length()); // This will throw a NullPointerException
        } catch (NullPointerException | ArrayIndexOutOfBoundsException ex) {
            // Handling both NullPointerException and ArrayIndexOutOfBoundsException
            System.out.println("An exception occurred: " + ex.getMessage());
        }
    }
}

The Criteria API in Hibernate lets us create criteria query objects dynamically, so it is another data-retrieval technique alongside HQL and native SQL queries. Its advantage is that you can design data-fetching methods without any hard-coded SQL statements, and because a criteria query is built programmatically in an object-oriented style, you get compile-time syntax checking: mistakes show up as compile-time errors rather than runtime failures. Although it is convenient, it offers no performance advantage over HQL or native SQL; the performance remains the same, so the choice between HQL, native SQL, and Criteria is a matter of preference.

The Hibernate Criteria API offers a dynamic way to create query objects for data retrieval in Hibernate. It’s an alternative to using HQL (Hibernate Query Language) or native SQL queries.

Here are some benefits of using the Criteria API:-

  1. Dynamic Query Creation: With the Criteria API, you can dynamically create query objects without hard-coding SQL statements. This flexibility allows you to construct queries based on changing conditions or user input.
  2. Programmatical Behavior: Criteria queries are created using object-oriented syntax, which offers compile-time syntax checking. This means you’ll catch errors early on during development, making your code more reliable.
  3. No Hard-Coded SQL: Using Criteria API helps you avoid writing hard-coded SQL statements directly in your code. This can make your code more maintainable and easier to understand, especially for developers who are more comfortable with Java than SQL.
  4. Performance Similarity: While Criteria queries offer convenience and flexibility, they don’t necessarily offer performance advantages over HQL or native SQL queries. The performance typically remains similar regardless of the approach you choose.
  5. Retrieval Focus: Criteria API is often used for data retrieval operations, particularly for SELECT queries. It provides a clean and expressive way to specify the conditions for fetching data from the database.

In summary, the Hibernate Criteria API provides a flexible and object-oriented approach to constructing queries in Hibernate, offering benefits such as dynamic query creation, programmatic behavior, and improved code readability. However, its performance is similar to that of HQL or native SQL queries, making it a matter of preference for developers.

Whenever we persist an entity, that entity might contain other entities. For example, if an Employee object has an Address embedded in it and the Employee is persisted, what happens to the Address? Does it persist as well? Cascade operations in JPA answer this question. There are basically six types of cascade operations:-

  1. Persist: When an entity is persisted, all related entities (such as embedded objects) will also be persisted. So, if an employee object is persisted, its related entities like address will also be saved in the database.
  2. Merge: If an entity is merged, all related entities will also be merged. This means changes made to the employee object, including its related address, will be updated in the database.
  3. Detach: When an entity is detached from the persistence context, its related entities are also detached. This means the address object will no longer be managed by the persistence context.
  4. Refresh: If an entity is refreshed, all related entities are refreshed as well, ensuring that the data in the address object is synchronized with the database. When the Employee entity is refreshed, any unsaved changes (like a temporary name change) are discarded and the entity’s state is synchronized with the database.
  5. Remove: When an entity is removed, its related entities will also be removed. So, if the employee object is deleted, the associated address object will also be deleted from the database.
  6. All: The “all” cascade type applies all of the above cascade operations to the related entities. This means any operation performed on the employee object will also be applied to its related address object.

Microservices is an architectural style where an application is divided into small, independent services, each responsible for a specific functionality. This approach contrasts with the traditional monolithic architecture, where the entire application is built as a single, interconnected unit.

Why Use Microservices?

  • Independence: Each microservice operates independently. If one service fails, it doesn’t bring down the entire application. This isolation improves the overall resilience of the system.
  • Increased Resilience: By decentralizing the application into multiple services, the impact of a single service failure is minimized. This fault tolerance ensures that the application remains operational even if some components fail.
  • Improved Scalability: Microservices allow individual services to be scaled independently based on their specific needs. For example, a business-critical service can be scaled across multiple servers without affecting other parts of the application. In contrast, a monolithic application requires scaling the entire application, which can be inefficient and costly.
  • Technology Flexibility: Microservices enable the use of different technologies and tools for different services. For instance, one service might be best implemented in Java, while another might be more suitable for a different technology. This flexibility allows teams to choose the best tool for each task.
  • Faster Time to Market: Because microservices are loosely coupled, development teams can work on different services simultaneously. Each team member can focus on a specific microservice, speeding up the development process. Once all services are ready, they can be deployed independently, reducing the time to market.
  • Easier Debugging and Maintenance: Debugging and maintaining microservices is more straightforward because each service is smaller and focused on a specific functionality. If a bug is found in a particular microservice, only the team responsible for that service needs to address it, allowing other teams to continue their work without interruption.

In summary:- Microservices offer several advantages over monolithic architectures, including improved resilience, scalability, flexibility, faster development, and easier maintenance. These benefits make microservices a popular choice for modern application development, enabling teams to build robust, scalable, and maintainable applications.

ClassNotFoundException vs NoClassDefFoundError

  • ClassNotFoundException is an exception, of type java.lang.Exception. NoClassDefFoundError is an error, of type java.lang.Error.
  • ClassNotFoundException occurs when an application tries to load a class at runtime that is not present on the classpath. NoClassDefFoundError occurs when the Java runtime system doesn’t find a class definition that was present at compile time but is missing at run time.
  • ClassNotFoundException is thrown by the application itself, via methods like Class.forName(), loadClass(), and findSystemClass(). NoClassDefFoundError is thrown by the Java Runtime System.
  • ClassNotFoundException occurs when the classpath is not updated with the required JAR files. NoClassDefFoundError occurs when the required class definition is missing at runtime.
public class ClassNotFoundExceptionDemo {
    public static void main(String[] args) {
        try {
            // Attempt to load the MySQL JDBC driver by name at runtime;
            // this throws ClassNotFoundException if the driver JAR is
            // not on the classpath
            Class.forName("com.mysql.jdbc.Driver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

Exception (when the MySQL driver JAR is not on the classpath):-

java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

public class Temp {
    int x = 1;
}

public class NoClassDefFoundErrorDemo {
    public static void main(String[] args) {
        Temp temp = new Temp();
        System.out.println(temp.x);
    }
}

Compile the program, which will generate the .class files for both the NoClassDefFoundErrorDemo and Temp classes:-

javac NoClassDefFoundErrorDemo.java

If we run NoClassDefFoundErrorDemo, it produces a valid result:-

java NoClassDefFoundErrorDemo

Output:-

1

Now, delete the generated Temp.class file and again try to run the NoClassDefFoundErrorDemo.class file:-

java NoClassDefFoundErrorDemo

Error:-

Exception in thread "main" java.lang.NoClassDefFoundError: Temp

This time the JVM cannot find the Temp.class file, so it throws a NoClassDefFoundError.

We can use any immutable class as a key. Java already provides many immutable classes, such as String and the wrapper classes.

Therefore java.lang.Integer, java.lang.Byte, java.lang.Character, java.lang.Short, java.lang.Boolean, java.lang.Long, java.lang.Double, and java.lang.Float are natural candidates for a Map key.
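To see why immutability matters for map keys, here is a minimal sketch; the MutableKey class is hypothetical, written only for this illustration. Mutating a key after insertion changes its hash code, so the map can no longer find the entry:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical mutable key class, for illustration only.
class MutableKey {
    int id;
    MutableKey(int id) { this.id = id; }
    @Override public int hashCode() { return id; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
}

public class ImmutableKeyDemo {
    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey(1);
        map.put(key, "value");

        key.id = 2; // mutate the key after insertion: its hash code changes

        // The entry still exists, but neither lookup can reach it:
        System.out.println(map.get(key));               // prints: null (hashes to the wrong bucket)
        System.out.println(map.get(new MutableKey(1))); // prints: null (right bucket, but equals() fails)
        System.out.println(map.size());                 // prints: 1 (the entry is effectively lost)
    }
}
```

Immutable keys such as String or Integer cannot change state after insertion, so their hash codes stay stable for the lifetime of the map.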

  • In Java, Strings are immutable and are stored in the String pool. Once a String is created, it stays in memory until it is garbage collected. Therefore, even after you are done processing the string value (e.g., the password), it remains available in memory for an indeterminate period over which you have no real control.
  • Anyone with access to a memory dump can therefore potentially extract the sensitive data and exploit it.
  • In contrast, if you use a mutable object such as a character array to store the value, you can overwrite it once you are done, with confidence that the plaintext is no longer retained in memory.
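A minimal sketch of this approach; the authenticate method is a hypothetical stand-in for a real authentication API:

```java
import java.util.Arrays;

public class PasswordDemo {
    // Hypothetical stand-in for a real authentication API, for illustration only.
    static boolean authenticate(char[] password) {
        return Arrays.equals(password, "secret".toCharArray());
    }

    public static void main(String[] args) {
        // Hold the secret in a char array rather than a String
        char[] password = {'s', 'e', 'c', 'r', 'e', 't'};
        System.out.println("Authenticated: " + authenticate(password)); // prints: Authenticated: true

        // Done with the secret: overwrite it so the plaintext is not
        // left sitting in memory waiting for garbage collection
        Arrays.fill(password, '\0');

        boolean cleared = true;
        for (char c : password) {
            cleared &= (c == '\0');
        }
        System.out.println("Cleared: " + cleared); // prints: Cleared: true
    }
}
```

This is also why APIs such as javax.swing.JPasswordField return a char[] rather than a String.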

Java Virtual Machine (JVM) is a specification that provides a runtime environment in which Java bytecode (.class files) can be executed. The JVM acts as a “virtual” machine or processor. Java’s platform independence rests largely on the JVM: each platform has its own JVM implementation, which knows the specific instruction lengths and other particularities of that platform (Operating System).

The JVM itself is not platform-independent. The JVM provides the environment to execute the .class file, but it ultimately depends on the kernel, and the kernel differs from one Operating System to another. The JVM both translates the bytecode into machine language for a particular computer and actually executes the corresponding machine-language instructions.

The Java ClassLoader is a part of the Java Runtime Environment that dynamically loads Java classes into the Java Virtual Machine. Java code is compiled into the class file by javac compiler and JVM executes the Java program, by executing byte codes written in the class file. ClassLoader is responsible for loading class files from the file system, network, or any other source.

Types of ClassLoader:-

  • Bootstrap Class Loader: It loads the core JDK classes from jre/lib/rt.jar, for example the classes of the java.lang package.
  • Extensions Class Loader: It loads classes from the JDK extensions directory, usually JAVA_HOME/jre/lib/ext, or any other directory specified by the java.ext.dirs system property.
  • System Class Loader: It loads application-specific classes from the CLASSPATH environment variable, which can also be set while invoking the program using the -cp or -classpath command-line options.
  • Custom ClassLoader: Developers can create their own ClassLoader by extending java.lang.ClassLoader. Useful for loading classes from unconventional sources or for implementing custom loading strategies.
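A small sketch that inspects this hierarchy at runtime. The exact output varies by JVM vendor and version (on Java 9+, the extensions loader was replaced by a platform class loader), so no specific strings are assumed:

```java
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // This application class is loaded by the system (application) class loader
        ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
        System.out.println("Application class loader: " + appLoader);

        // Its parent is the extensions loader (Java 8) or the platform loader (Java 9+)
        System.out.println("Parent class loader:      " + appLoader.getParent());

        // Core classes such as java.lang.String are loaded by the bootstrap
        // loader, which is implemented natively and therefore shown as null
        System.out.println("Loader of String:         " + String.class.getClassLoader());
    }
}
```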

An exception is first thrown from the top of the call stack. If it is not caught there, it drops down to the calling method; if it is not caught there either, it drops down again, and so on, until it is either caught or reaches the very bottom of the call stack. This is called exception propagation.
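A minimal sketch of propagation; the class and method names are illustrative. The exception is thrown in level3(), passes uncaught through level2(), and is finally caught in level1():

```java
public class PropagationDemo {
    static void level3() {
        // The exception is thrown here, at the top of the call stack
        int result = 10 / 0; // ArithmeticException: / by zero
    }

    static void level2() {
        level3(); // no handler here, so the exception propagates down
    }

    static void level1() {
        try {
            level2(); // the exception arrives here after passing through level2()
        } catch (ArithmeticException e) {
            System.out.println("Caught in level1: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        level1(); // prints: Caught in level1: / by zero
    }
}
```

If no method on the stack catches the exception, the JVM's default handler prints the stack trace and terminates the thread.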

If you enjoyed this post, share it with your friends. Do you want to share more information about the topic discussed above or do you find anything incorrect? Let us know in the comments. Thank you!
