Interview Questions for Java Developers with 3 Years of Experience | Covers scenario-based questions on Core Java, Spring, Maven, Git, and the Stream API. Also see: Most Commonly Asked Java Interview Questions.
1. What is the inner working of the Java memory model, and how does it relate to garbage collection?
The Java Memory Model (JMM) specifies how the Java Virtual Machine (JVM) works with memory and how threads interact through memory. It defines the rules for read and write operations, visibility of variables, and ensures memory consistency across different threads, thereby supporting concurrency.
Key Components:-
- Heap: The runtime data area from which memory for all class instances and arrays is allocated. It is managed by the Garbage Collector and divided into generations:
- Young Generation: Holds newly created objects; includes the Eden space and the Survivor spaces (S0 and S1).
- Old Generation (Tenured): Holds objects that have survived multiple garbage collection cycles.
- Metaspace (which replaced the Permanent Generation in Java 8): Metadata such as class information, method information, and static variables.
- Stack: Stores frames, which contain local variables and partial results. It’s thread-specific and each method call creates a new frame.
- Program Counter (PC) Register: Contains the address of the current instruction being executed. Each thread has its own PC register.
- Native Method Stack: Holds native method information.
Relation to Garbage Collection:- Garbage Collection (GC) is an automatic process that manages memory, reclaiming memory used by objects that are no longer reachable in the application. Key steps include:-
- Marking: Identifies which objects are reachable.
- Deletion: Removes unreachable objects and reclaims memory.
- Compaction: Optionally reorganizes remaining objects to reduce fragmentation.
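As a minimal sketch (the class name is hypothetical), an object becomes eligible for collection once no live reference reaches it:
public class GcDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[10 * 1024 * 1024]; // allocated on the heap, initially in the young generation
        buffer = null;  // the array is now unreachable and eligible for collection
        System.gc();    // only a hint; the JVM decides when marking and reclamation actually run
    }
}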
2. In a multithreaded environment, multiple threads are updating a field of the same object. How would you ensure that these updates are thread-safe?
To ensure thread safety when multiple threads update the same field, we can use the AtomicInteger class from the java.util.concurrent.atomic package. It provides atomic operations that are thread-safe without needing synchronized blocks.
import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet();
    }

    public int getCount() {
        return count.get();
    }

    public static void main(String[] args) {
        Counter counter = new Counter();
        // Create multiple threads to increment the counter
        for (int i = 0; i < 10; i++) {
            new Thread(counter::increment).start();
        }
        // Allow some time for threads to finish
        try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); }
        System.out.println("Final count: " + counter.getCount());
    }
}
In addition to AtomicInteger, the java.util.concurrent.atomic package includes several other atomic classes designed for thread-safe operations:- AtomicLong, AtomicBoolean, AtomicReference, AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray.
Alternatively, we can use the synchronized keyword or ReentrantLock for more complex synchronization needs, but AtomicInteger provides a more efficient and cleaner solution for simple atomic operations.
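For comparison, a minimal sketch of the synchronized alternative mentioned above:
public class SynchronizedCounter {
    private int count = 0;

    // Locking on 'this' guarantees both mutual exclusion and visibility,
    // but every call pays the cost of acquiring the monitor.
    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}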
3. You are tasked with reducing memory usage in a Java application. What strategies would you use to identify and fix memory leaks?
Reducing memory usage and fixing memory leaks in a Java application involves a few key strategies:-
- Use Profiling Tools: Tools like VisualVM, YourKit, or Eclipse Memory Analyzer can help you identify memory usage patterns and pinpoint memory leaks. They provide heap dumps and analysis to track down problematic objects.
- Check for Unused Objects: Ensure that objects are set to null when no longer needed, making them eligible for garbage collection.
- Use Weak References: For large objects, consider using WeakReference or SoftReference to allow garbage collection when memory is tight.
- Avoid Memory-Heavy Data Structures: Opt for memory-efficient data structures. For example, prefer ArrayList over LinkedList if random access is needed, and use HashMap or HashSet wisely.
- Review Static References: Static fields can cause memory leaks if they hold references to large objects, as they persist for the life of the application (see the sketch after this list).
- Efficiently Handle Streams and Connections: Ensure that streams, connections, and other I/O resources are properly closed after use.
- Optimize Object Lifecycle: Review object creation and destruction patterns to ensure that objects do not live longer than necessary.
- Analyze Finalizers: Avoid using finalizers; they can delay garbage collection. Use try-with-resources for resource management instead.
- Use StringBuilder/StringBuffer for Repeated String Manipulation: Repeated concatenation on String creates many intermediate objects; StringBuilder (or the thread-safe StringBuffer) avoids this overhead.
Identifying and fixing memory leaks is an ongoing process. Regular profiling and code reviews can keep your application running smoothly and efficiently.
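As an illustration of the static-reference pitfall above, here is a minimal sketch of a leak and its fix (the class and field names are hypothetical):
import java.util.HashMap;
import java.util.Map;

public class SessionRegistry {
    // Leak: the static map lives for the entire application lifetime,
    // so anything it references can never be garbage collected.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void store(String key, byte[] data) {
        CACHE.put(key, data);
    }

    // Fix: remove entries when they are no longer needed
    // (or use a bounded, evicting cache instead of a plain HashMap).
    public static void release(String key) {
        CACHE.remove(key);
    }
}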
4. Explain the changes to the Java concurrency model introduced in Java 8.
Java 8 introduced several enhancements to the concurrency model, making it more powerful and flexible:
- CompletableFuture: A versatile enhancement over Future, providing methods to create asynchronous pipelines and handle results and exceptions more efficiently (see the sketch after this list).
- Parallel Streams: Allows effortless parallel processing of collections with the Stream API by simply calling .parallelStream(), automatically handling thread management.
- New Methods in Existing Classes: Introduction of methods like forEach in ConcurrentHashMap for parallel processing and computeIfAbsent to handle concurrent access patterns efficiently.
- StampedLock: A new kind of lock that improves upon ReadWriteLock, offering three modes: write lock, read lock, and optimistic read, enhancing performance in multi-threaded scenarios.
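A minimal sketch of the CompletableFuture pipeline mentioned above:
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    public static void main(String[] args) {
        CompletableFuture.supplyAsync(() -> "order-42")              // runs on the common ForkJoinPool
                .thenApply(id -> "Processed " + id)                  // transform the result asynchronously
                .exceptionally(ex -> "Fallback: " + ex.getMessage()) // recover from any failure upstream
                .thenAccept(System.out::println)
                .join();  // block only so the demo JVM does not exit early
    }
}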
5. Explain the mechanism behind the Spring Boot autoconfiguration.
Spring Boot’s autoconfiguration is a powerful feature that simplifies the setup and configuration of Spring applications by automatically configuring beans based on the dependencies present on the classpath. Here’s how it works:
- Dependency Detection: Spring Boot scans the classpath for dependencies and uses this information to determine which autoconfigurations to apply.
- Conditional Configuration: Autoconfiguration classes are annotated with @Conditional annotations, which allow Spring Boot to conditionally enable or disable configurations based on the presence or absence of specific classes or beans.
- Starter Projects: Spring Boot provides starter projects (e.g., spring-boot-starter-web) that include a set of pre-configured dependencies and autoconfigurations tailored for specific use cases.
- Custom Autoconfigurations: Developers can create their own autoconfigurations as @Configuration classes (annotated with @AutoConfiguration in Spring Boot 2.7+) and register them in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports (or spring.factories in older versions). These can be bundled in external JARs and still be picked up by Spring Boot.
Example: If we add the spring-boot-starter-web dependency, Spring Boot will automatically configure a web server (like Tomcat or Jetty), set up Spring MVC, and configure other web-related beans.
Benefits:-
- Simplified Setup: Reduces boilerplate code and setup time.
- Flexibility: Easily replace or override autoconfigurations with custom configurations.
- Consistency: Ensures consistent configuration across different environments.
Spring Boot’s autoconfiguration makes it easier to focus on writing business logic rather than dealing with framework setup.
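As a minimal sketch of the conditional mechanism (the configuration and service names are hypothetical), an autoconfiguration only takes effect when its trigger class is on the classpath and backs off if the application already defines the bean:
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass(javax.sql.DataSource.class) // applied only if the JDBC API is on the classpath
public class MyModuleAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean // back off if the application already defines its own MyModuleService
    public MyModuleService myModuleService() {
        return new MyModuleService();
    }
}

class MyModuleService {
    // hypothetical service wired up by the autoconfiguration above
}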
6. How does Spring Boot support Internationalization in Microservices?
Spring Boot supports internationalization (i18n) in microservices by providing a comprehensive set of tools and features to handle multiple languages and locales seamlessly. Here’s how it works:-
- MessageSource: Spring Boot uses MessageSource to manage localized messages. By default, it looks for properties files like messages.properties for the default locale and messages_<locale>.properties for specific locales.
- Locale Resolver: Spring Boot provides LocaleResolver to determine the current locale. Common implementations include SessionLocaleResolver and CookieLocaleResolver.
- Thymeleaf Integration: When using Thymeleaf for templating, Spring Boot automatically supports i18n by binding messages to the templates.
- Autoconfiguration: Spring Boot autoconfigures the necessary beans for i18n based on the dependencies present on the classpath.
import java.util.Locale;
import org.springframework.context.MessageSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.ResourceBundleMessageSource;
import org.springframework.web.servlet.LocaleResolver;
import org.springframework.web.servlet.i18n.SessionLocaleResolver;

@Configuration
public class AppConfig {

    @Bean
    public LocaleResolver localeResolver() {
        SessionLocaleResolver slr = new SessionLocaleResolver();
        slr.setDefaultLocale(Locale.US);
        return slr;
    }

    @Bean
    public MessageSource messageSource() {
        ResourceBundleMessageSource messageSource = new ResourceBundleMessageSource();
        messageSource.setBasename("messages");
        messageSource.setDefaultEncoding("UTF-8");
        return messageSource;
    }
}
7. How does Spring Handle Bean Lifecycle?
Spring manages the lifecycle of beans in a highly structured and customizable way, ensuring that beans are properly initialized, configured, and destroyed. Here’s an overview:
- Instantiation: Spring creates an instance of the bean, either through constructor injection or a factory method.
- Dependency Injection: Dependencies are injected into the bean, using setter methods, constructor arguments, or field injection.
- Initialization:
- @PostConstruct: A method annotated with @PostConstruct will be called after the bean’s properties are set.
- InitializingBean: Implement the afterPropertiesSet() method for custom initialization logic.
- Custom Init Method: Specify a custom init method using the init-method attribute in XML or @Bean(initMethod = "init") in Java configuration.
- Use: The bean is now ready for use by the application.
- Destruction:
- @PreDestroy: A method annotated with @PreDestroy will be called before the bean is destroyed.
- DisposableBean: Implement the destroy() method for custom destruction logic.
- Custom Destroy Method: Specify a custom destroy method using the destroy-method attribute in XML or @Bean(destroyMethod = "destroy") in Java configuration.
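A minimal sketch tying the initialization and destruction hooks together (assuming Spring Boot 3 / Jakarta annotations):
import jakarta.annotation.PostConstruct;
import jakarta.annotation.PreDestroy;
import org.springframework.stereotype.Component;

@Component
public class ConnectionManager {

    @PostConstruct
    public void init() {
        // runs after dependency injection is complete
        System.out.println("Opening connections...");
    }

    @PreDestroy
    public void cleanup() {
        // runs just before the container destroys the bean
        System.out.println("Closing connections...");
    }
}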
8. How can we implement custom scope in Spring Beans?
Implementing custom scope in Spring Beans allows you to define and manage bean lifecycles in ways that suit your application. Here’s how you can do it:
- Define a Custom Scope: Implement the Scope interface to create a custom scope.
- Register the Custom Scope: Register your custom scope with the Spring container using the CustomScopeConfigurer.
- Use the Custom Scope: Use your custom scope in bean definitions.
Example: Define the Custom Scope:
import java.util.HashMap;
import java.util.Map;
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class CustomScope implements Scope {

    private Map<String, Object> scopedObjects = new HashMap<>();

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        return scopedObjects.computeIfAbsent(name, k -> objectFactory.getObject());
    }

    @Override
    public Object remove(String name) {
        return scopedObjects.remove(name);
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback) {
        // No-op: this simple scope does not track destruction callbacks
    }

    @Override
    public Object resolveContextualObject(String key) {
        return null;
    }

    @Override
    public String getConversationId() {
        return "custom";
    }
}
- Register the Custom Scope:
import java.util.HashMap;
import java.util.Map;
import org.springframework.beans.factory.config.CustomScopeConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

    @Bean
    public static CustomScopeConfigurer customScopeConfigurer() {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        Map<String, Object> scopes = new HashMap<>();
        scopes.put("custom", new CustomScope());
        configurer.setScopes(scopes);
        return configurer;
    }
}
- Use the Custom Scope:
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("custom")
public class CustomScopedBean {
    // Bean logic here
}
9. Describe a scenario where you would use Bean Factory over ApplicationContext.
Using BeanFactory over ApplicationContext is generally recommended in lightweight applications where memory consumption and performance are crucial considerations.
Imagine you’re developing a small, resource-constrained IoT application that runs on an embedded system with limited memory and processing power. In this context, the leaner and more efficient BeanFactory is a better choice than the more feature-rich ApplicationContext.
Why BeanFactory?
- Lightweight: BeanFactory doesn’t pre-instantiate beans, thus consuming less memory.
- Lazy Loading: Only creates beans when they are requested, which is ideal for applications where not all components are always needed.
- Reduced Overhead: Lacks the extended features of ApplicationContext, such as event propagation, AOP, and internationalization, which aren’t necessary for all applications.
In a minimal IoT application where you only need basic DI and not the advanced features provided by ApplicationContext, BeanFactory would be more than sufficient. You might not require the overhead of preloading all beans or the extra functionality provided by ApplicationContext.
10. You need to improve the performance of a Spring Data JPA-based application that is experiencing slow query times. What strategies would you use?
Boosting the performance of a Spring Data JPA application involves a few key strategies:
- Optimize Queries:
- Use native queries for complex operations.
- Ensure indexes are properly used and maintained.
- Avoid fetching unnecessary data with appropriate select queries.
- Cache Results:
- Leverage second-level caching with providers like EhCache or Hazelcast.
- Use query caching to avoid frequent database hits for the same data.
- Batch Processing:
- Utilize batch processing to reduce the overhead of individual transactions.
- Use the @Modifying annotation for bulk updates/deletes (see the sketch after this list).
- Fetch Strategies:
- Use LAZY fetching for associations to avoid loading unnecessary data.
- Adjust fetch plans to balance data retrieval and performance.
- Profiling Tools: Use tools like Hibernate Profiler or Spring Boot Actuator to monitor and profile slow queries.
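For example, a bulk update with @Modifying issues a single UPDATE statement instead of loading and saving entities one by one (a minimal sketch; the Order entity and repository are hypothetical):
import java.time.LocalDateTime;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.transaction.annotation.Transactional;

public interface OrderRepository extends JpaRepository<Order, Long> {

    // One bulk UPDATE in the database instead of N entity loads and saves
    @Modifying
    @Transactional
    @Query("UPDATE Order o SET o.status = :status WHERE o.createdAt < :cutoff")
    int archiveOrdersBefore(@Param("status") String status, @Param("cutoff") LocalDateTime cutoff);
}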
11. Difference between the second-level cache and the Redis cache?
Second-Level Cache:
- Scope: Limited to the application level; it caches entity data at the session factory level within a single application instance.
- Implementation: Typically provided by JPA providers like Hibernate, often using providers like EhCache or Infinispan.
- Purpose: Reduces database hits by caching entities across sessions within the same application instance.
- Persistence: Not persistent; data is lost when the application is restarted.
Redis Cache:
- Scope: Network-level cache; can be accessed by multiple applications and instances.
- Implementation: External, in-memory data store using Redis, which supports data structures like strings, hashes, lists, and sets.
- Purpose: High-performance caching and more, providing features like distributed data storage and real-time analytics.
- Persistence: Optional persistence; Redis can be configured to persist data to disk and recover it after restarts.
Each serves its purpose, and Redis often complements the second-level cache for broader caching needs.
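As a minimal sketch of the Redis side (assuming spring-boot-starter-data-redis and a Redis-backed CacheManager are configured; the service and entity names are hypothetical):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // With a Redis-backed CacheManager, the result is stored in Redis under the
    // "products" cache and is visible to every instance of this microservice.
    @Cacheable(value = "products", key = "#id")
    public Product findProduct(Long id) {
        return loadFromDatabase(id); // placeholder for an expensive lookup
    }

    private Product loadFromDatabase(Long id) {
        return new Product(); // stand-in for a real repository call
    }
}

class Product {
    // hypothetical domain type
}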
12. Difference between Git Rebase and Git Merge?
Git Rebase is used to integrate changes from one branch into another by moving the base of the branch to a new starting point. It rewrites the commit history to create a linear sequence of commits, which can make the project history cleaner and easier to follow. During a rebase, conflicts must be resolved as they occur, which can be more immediate but might require more careful handling of changes. Rebase is particularly useful when you want to maintain a linear project history, making it look as if the changes were implemented sequentially. The command for rebase is git rebase branch_name.
Git Merge, on the other hand, integrates changes from one branch into another by creating a new merge commit. Unlike rebase, merge preserves the commit history, resulting in a non-linear history that retains the original branch structure. This can be helpful to maintain the context of historical changes and understand the branching structure over time. Conflicts must be resolved when the merge is performed. Merge is generally used when you want to keep the original branching context intact. The command for merge is git merge branch_name.
In summary, rebase creates a linear history by rewriting commits, making the project history clean, while merge retains the original branching history, providing a more complete view of how the project has evolved. Choose the method based on your project’s needs and how you want to visualize its history.
Feature | Git Rebase | Git Merge
---|---|---
Purpose | Reapply commits on top of another base tip | Combine multiple sequences of commits into one history
History | Creates a linear, cleaner project history | Preserves the complete history, including branch merges
Commit History | Commits are replayed on the target branch | Commits from both branches are retained as they were
Conflict Resolution | Conflicts resolved as each commit is replayed | Conflicts resolved once, at the time of merge
Usage | Used for a clean, linear commit history | Used for preserving historical context and merge points
Risk | Rewrites commit history, which can be risky on shared branches | Does not rewrite commit history
Command | git rebase branch_name | git merge branch_name
13. What are the Maven build life cycles?
Maven defines three built-in build life cycles (clean, default, and site), each consisting of a series of well-defined phases. Each phase performs a specific task in the build process, ensuring that the project is built correctly. Here’s a breakdown of the main phases:
Clean Life Cycle:
- pre-clean: Perform tasks before cleaning.
- clean: Remove all files generated by the previous build.
- post-clean: Perform tasks after cleaning.
Default Life Cycle:
- validate: Validate the project is correct and all necessary information is available.
- compile: Compile the source code.
- test: Test the compiled source code using a suitable testing framework.
- package: Package the compiled code into a distributable format, like a JAR or WAR.
- verify: Run any checks to verify the package is valid and meets quality criteria.
- install: Install the package into the local repository for use as a dependency in other projects.
- deploy: Deploy the package to a remote repository for sharing with other developers.
Site Life Cycle:
- pre-site: Perform tasks before generating the site documentation.
- site: Generate the project’s site documentation.
- post-site: Perform tasks after site generation and before site deployment.
- site-deploy: Deploy the generated site documentation to a web server.
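For example, running mvn install executes every earlier phase of the default life cycle in order (validate, compile, test, package, verify) before the install phase itself; invoking a phase always runs all the phases that precede it.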
14. How would you manage multi-module Maven projects and their dependencies?
Managing multi-module Maven projects involves organizing your project into multiple sub-modules that share a common build configuration. Here’s how you can do it:
- Project Structure:
- Parent Project: It contains the main pom.xml file. It manages shared dependencies and plugins.
- Sub-Modules: Each sub-module has its own pom.xml. These inherit configurations from the parent pom.xml.
- Parent pom.xml:
Set up a parent POM to manage common configurations and dependencies.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>parent-project</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <modules>
        <module>module1</module>
        <module>module2</module>
    </modules>

    <properties>
        <!-- Centralize the version so sub-modules can reference ${spring.version} -->
        <spring.version>5.3.6</spring.version>
    </properties>

    <dependencyManagement>
        <dependencies>
            <!-- Common dependencies for all modules -->
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-core</artifactId>
                <version>${spring.version}</version>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
- Sub-Module pom.xml:
Each sub-module defines its specific dependencies while inheriting common configurations from the parent.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>com.example</groupId>
        <artifactId>parent-project</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>

    <artifactId>module1</artifactId>

    <dependencies>
        <!-- Module-specific dependencies; ${spring.version} is inherited from the parent POM -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</project>
15. How would you optimize Maven build speed for large projects?
Optimizing Maven build speed for large projects involves several strategies:
- Parallel Builds: Enable parallel builds with the -T option (mvn -T 1C install) to utilize multiple CPU cores; 1C means one build thread per core.
- Dependency Management: Avoid unnecessary dependencies and keep dependency versions up to date. Use the dependencyManagement section in the parent POM to manage versions consistently.
- Incremental Builds: Enable incremental compilation in the maven-compiler-plugin by setting <useIncrementalCompilation>true</useIncrementalCompilation> so unchanged sources are not needlessly recompiled.
- Efficient Plugin Configuration: Configure plugins to minimize overhead; for instance, limit the scope of testing to relevant modules only (surefire or failsafe).
- Profile Management: Use Maven profiles to build only specific parts of the project when needed (mvn install -P profileName).
- Local Repository Optimization: Use a local Maven repository mirror (such as Nexus or Artifactory) to speed up dependency resolution.
- Reduce Logging: Lower the log level to reduce console output overhead (mvn install -q).
- Skip Unnecessary Goals: Skip goals not needed for every build, such as tests or documentation generation (mvn install -DskipTests).
16. For a given list, use the Stream API to find the sum of elements at odd indexes.
import java.util.Arrays;
import java.util.List;
import java.util.stream.IntStream;

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
int sum = IntStream.range(0, numbers.size())
        .filter(i -> i % 2 != 0)   // keep odd indexes (1, 3)
        .map(i -> numbers.get(i))  // index -> element
        .sum();
System.out.println(sum); // 6
17. Use the stream API to find the factorial of a given number.
import java.util.stream.LongStream;

int n = 5;
// LongStream keeps the running product in long arithmetic, avoiding early int overflow
long factorial = LongStream.rangeClosed(1, n).reduce(1, (a, b) -> a * b);
System.out.println(factorial); // 120
18. Find the sum of odd numbers in a list using Stream API.
import java.util.Arrays;
import java.util.List;

public class Test {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        int sumOfOddNumbers = numbers.stream()
                .filter(n -> n % 2 != 0)
                .reduce((a, b) -> a + b).get();
        System.out.println(sumOfOddNumbers); // 9
    }
}
19. Find the sum of squares of odd numbers in a list using Stream API.
import java.util.Arrays;
import java.util.List;

public class Test {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        int sumOfSquareOfOddNumbers = numbers.stream()
                .filter(n -> n % 2 != 0)
                .map(n -> n * n)
                .reduce((a, b) -> a + b).get();
        System.out.println(sumOfSquareOfOddNumbers); // 35
    }
}
20. What is the status code for a successful deletion?
The typical HTTP status code for a successful deletion is:
- 204 No Content: Indicates that the server has successfully processed the request and there is no further content to return.
In Spring, this maps to the HttpStatus constant: NO_CONTENT(204, Series.SUCCESSFUL, "No Content").
Alternatively, some APIs might return:
- 200 OK: If they want to include a response body confirming the deletion (e.g., { "message": "Resource deleted successfully" }).
Using 204 No Content is generally recommended for DELETE operations since it implies the operation was successful without returning any content.
21. What are the different ways through which two microservices can communicate?
There are several ways for two microservices to communicate, and the choice depends on factors such as latency, data consistency, system coupling, and scalability. Microservices communication can be classified into two types:
- Synchronous Communication
- Asynchronous Communication
Use RestTemplate (now in maintenance mode) or WebClient / OpenFeign for synchronous communication:
- These tools are ideal for making HTTP-based request-response calls between microservices.
- OpenFeign simplifies REST API calls with declarative syntax and integrates easily with Spring Boot.
- Example use case: Fetching user details from a user-service in real time.
Use Kafka or RabbitMQ for asynchronous communication:
- These message brokers are ideal for event-driven systems where real-time response isn’t required, but decoupling is essential.
- Example use case: Publishing an “order placed” event to notify other services without waiting for them to respond immediately.
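A minimal OpenFeign sketch for the synchronous case (assuming spring-cloud-starter-openfeign is on the classpath and @EnableFeignClients is set on the application class; the service and DTO names are hypothetical):
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Declarative HTTP client: Spring Cloud generates the implementation at runtime
@FeignClient(name = "user-service")
public interface UserClient {

    @GetMapping("/users/{id}")
    UserDto getUser(@PathVariable("id") Long id);
}

class UserDto {
    public Long id;
    public String name;
}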
22. How to apply Indexing in Spring Data JPA?
In Spring Data JPA, you can create indexes for database tables using the @Index
annotation inside the @Table
annotation. This ensures better query performance, especially when your queries often involve certain columns.
1. Add a Database Index Using the @Index Annotation:- The @Index annotation is used within the @Table annotation to specify one or more indexes for a JPA entity.
import jakarta.persistence.*;
import java.time.LocalDateTime;

@Entity
@Table(name = "users", indexes = {
    @Index(name = "idx_username", columnList = "username", unique = true),
    @Index(name = "idx_created_at", columnList = "createdAt")
})
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(nullable = false, unique = true)
    private String username;

    @Column(nullable = false)
    private String email;

    @Column(name = "createdAt", nullable = false)
    private LocalDateTime createdAt;

    // Getters and setters
}
Explanation:
- @Index: Defines an index on a column or combination of columns.
- name: The name of the index.
- columnList: A comma-separated list of columns for the index.
- unique: (Optional) Makes the index unique.

This example creates:
- A unique index on the username column.
- A regular index on the createdAt column.
2. Composite Index (Multiple Columns)
If you need an index on multiple columns, you can specify them in columnList.
Example: Composite index on username and email:
@Entity
@Table(name = "users", indexes = {
    @Index(name = "idx_username_email", columnList = "username, email")
})
public class User {
    // Fields and methods
}
With the indexed columns in place, your queries can now run more efficiently. For example:
@Query("SELECT u FROM User u WHERE u.username = :username")
User findByUsername(@Param("username") String username);
You can manually verify that the indexes are created by running a query in your database. For example, in MySQL:-
SHOW INDEX FROM users;
23. Swap numbers with and without using temporary variables.
Swap two numbers using a temporary variable.
int a = 5, b = 9;
int temp = a;
a = b;
b = temp;
System.out.println(a + " " + b);
Swap two numbers without using a temporary variable.
// swap without using temporary variable
int a = 5, b = 9;
a = a ^ b;
b = a ^ b;
a = a ^ b;
System.out.println(a + " " + b);
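Another common variant (a sketch) swaps with addition and subtraction; like the XOR trick it needs no extra variable, but it can overflow for large int values:
// swap using addition and subtraction (beware of int overflow for large values)
int a = 5, b = 9;
a = a + b; // a now holds the sum of both
b = a - b; // b becomes the original a
a = a - b; // a becomes the original b
System.out.println(a + " " + b); // 9 5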
24. From a given string, extract all the dollar amounts.
Sample string: “1 Rental $70,000Shopping $299. Expenses $800. House$2,00,000”
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Test {
    public static void main(String[] args) {
        String text = "1 Rental $70,000Shopping $299. Expenses $800. House$2,00,000";
        List<String> dollarAmounts = extractDollarAmounts(text);
        System.out.println("Dollar amounts: " + dollarAmounts);
    }

    private static List<String> extractDollarAmounts(String text) {
        List<String> dollarAmounts = new ArrayList<>();
        Pattern pattern = Pattern.compile("\\$\\d{1,3}(,\\d{2,3})*(\\.\\d{1,2})?");
        Matcher matcher = pattern.matcher(text);
        while (matcher.find()) {
            dollarAmounts.add(matcher.group());
        }
        return dollarAmounts;
    }
}
Pattern: \\$\\d{1,3}(,\\d{2,3})*(\\.\\d{1,2})?
- \\$ : Matches the dollar sign.
- \\d{1,3} : Matches one to three digits.
- (,\\d{2,3})* : Matches groups of two to three digits separated by commas.
- (\\.\\d{1,2})? : Matches an optional decimal part with one or two digits.
25. How to implement spring security in the spring boot application?
Implementing Spring Security in a Spring Boot application involves configuring security settings and adding dependencies.
- Add the spring-boot-starter-security dependency in pom.xml.
- Create a security configuration class. In older versions this meant extending WebSecurityConfigurerAdapter; since Spring Security 5.7 that class is deprecated (and removed in 6.x), and the recommended approach is to declare a SecurityFilterChain bean.
- Use the @EnableWebSecurity annotation to enable Spring Security.
- Define authentication and authorization rules via HttpSecurity, either in the old configure(HttpSecurity http) method or in the SecurityFilterChain bean, as sketched below.
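A minimal sketch of the current SecurityFilterChain style (Spring Security 6.x lambda DSL; the endpoint paths are hypothetical):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/public/**").permitAll() // open endpoints
                .anyRequest().authenticated())             // everything else requires authentication
            .formLogin(Customizer.withDefaults())
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}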
If you enjoyed this post, share it with your friends. Do you want to share more information about the topic discussed above or do you find anything incorrect? Let us know in the comments. Thank you!