# Critical Algorithms and Data Structures in Java

## Binary Search

Binary search is a fundamental algorithm used to efficiently search for a specific element in a sorted array or list. It follows a divide and conquer approach, repeatedly dividing the search space in half until the target element is found or determined to be not present.

The key idea behind binary search is to compare the target element with the middle element of the array. If the target element is equal to the middle element, the search is successful. If the target element is smaller, the search is performed on the left half of the array. If the target element is larger, the search is performed on the right half of the array. This process is repeated until the target element is found or the search space is empty.

Here is a Java implementation of the binary search algorithm:

```java
public class BinarySearch {
    public static int binarySearch(int[] arr, int target) {
        int left = 0;
        int right = arr.length - 1;

        while (left <= right) {
            // Written this way to avoid integer overflow for very large arrays
            int mid = left + (right - left) / 2;

            if (arr[mid] == target) {
                return mid;
            } else if (arr[mid] < target) {
                left = mid + 1;
            } else {
                right = mid - 1;
            }
        }

        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        int target = 6;

        int result = binarySearch(arr, target);

        if (result == -1) {
            System.out.println("Element not found");
        } else {
            System.out.println("Element found at index " + result);
        }
    }
}
```

In this example, we have an array `arr` containing elements from 1 to 10. We are searching for the target element 6 using the `binarySearch` method. If the element is found, the index is returned. Otherwise, -1 is returned. In this case, the output will be “Element found at index 5”.

Binary search has a time complexity of O(log n), where n is the number of elements in the array. This makes it a very efficient algorithm for searching in large sorted arrays.
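
For comparison, the Java standard library ships this exact algorithm as `Arrays.binarySearch`; the `BuiltinBinarySearch` class name below is just for illustration. Note its return convention for missing elements:

```java
import java.util.Arrays;

public class BuiltinBinarySearch {
    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};

        // Arrays.binarySearch returns the index of the target,
        // or (-(insertion point) - 1) if the target is absent.
        System.out.println(Arrays.binarySearch(arr, 6));  // 5
        System.out.println(Arrays.binarySearch(arr, 11)); // -11: would insert at index 10
    }
}
```

The negative encoding lets a caller distinguish "not found" from any valid index while still learning where the element would belong.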

## Merge Sort

Merge sort is a popular sorting algorithm that follows a divide and conquer approach. It involves dividing the input array into smaller subarrays, sorting them recursively, and then merging the sorted subarrays to produce the final sorted array.

The key idea behind merge sort is to repeatedly divide the array in half until each subarray contains only one element. Then, the merging process takes place, where the sorted subarrays are combined to produce a larger sorted array. This process is repeated until the entire array is sorted.

Here is a Java implementation of the merge sort algorithm:

```java
import java.util.Arrays;

public class MergeSort {
    public static void mergeSort(int[] arr) {
        if (arr.length <= 1) {
            return;
        }

        int mid = arr.length / 2;
        int[] left = new int[mid];
        int[] right = new int[arr.length - mid];

        for (int i = 0; i < mid; i++) {
            left[i] = arr[i];
        }

        for (int i = mid; i < arr.length; i++) {
            right[i - mid] = arr[i];
        }

        mergeSort(left);
        mergeSort(right);
        merge(arr, left, right);
    }

    private static void merge(int[] arr, int[] left, int[] right) {
        int i = 0;
        int j = 0;
        int k = 0;

        while (i < left.length && j < right.length) {
            if (left[i] <= right[j]) {
                arr[k++] = left[i++];
            } else {
                arr[k++] = right[j++];
            }
        }

        while (i < left.length) {
            arr[k++] = left[i++];
        }

        while (j < right.length) {
            arr[k++] = right[j++];
        }
    }

    public static void main(String[] args) {
        int[] arr = {5, 2, 9, 1, 7, 6, 3};

        mergeSort(arr);

        System.out.println("Sorted array: " + Arrays.toString(arr));
    }
}
```

In this example, we have an unsorted array `arr` containing integers. We are sorting the array using the `mergeSort` method. The sorted array is then printed using the `Arrays.toString` method. In this case, the output will be “Sorted array: [1, 2, 3, 5, 6, 7, 9]”.

Merge sort has a time complexity of O(n log n), where n is the number of elements in the array. It is a stable sorting algorithm, meaning that the relative order of equal elements is preserved. This makes it useful in many scenarios, such as sorting linked lists or objects based on multiple keys.
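
Stability is easy to see with the standard library's `Arrays.sort`, which uses a stable merge-sort variant for object arrays. The names and scores below are made-up sample data for illustration:

```java
import java.util.Arrays;
import java.util.Comparator;

public class StableSortDemo {
    public static void main(String[] args) {
        // {name, score} pairs; insertion order matters for the demonstration.
        String[][] people = {
            {"Alice", "2"}, {"Bob", "1"}, {"Carol", "2"}, {"Dave", "1"}
        };

        // Sorting by score only: because the sort is stable, people with
        // equal scores keep their original relative order.
        Arrays.sort(people, Comparator.comparing((String[] p) -> p[1]));

        for (String[] p : people) {
            System.out.println(p[0] + " " + p[1]);
        }
        // Bob before Dave (both score 1), Alice before Carol (both score 2).
    }
}
```

Sorting by a secondary key first and a primary key second, each with a stable sort, yields a multi-key ordering without writing a compound comparator.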

## Hash Table

A hash table, also known as a hash map, is a data structure that allows for efficient insertion, deletion, and retrieval of key-value pairs. It uses a hash function to compute an index into an array of buckets, where each bucket can hold multiple key-value pairs.

The key idea behind a hash table is to store the key-value pairs in an array, where the index of each pair is determined by applying a hash function to the key. The hash function converts the key into an integer, which is then used as the array index. This allows for constant-time access to the value associated with a given key.

Here is a Java implementation of a hash table using separate chaining for collision resolution:

```java
import java.util.ArrayList;
import java.util.List;

public class HashTable<K, V> {
    private int capacity;
    private int size;
    private List<List<Entry<K, V>>> buckets;

    public HashTable(int capacity) {
        this.capacity = capacity;
        this.size = 0;
        this.buckets = new ArrayList<>(capacity);

        for (int i = 0; i < capacity; i++) {
            buckets.add(new ArrayList<>());
        }
    }

    public V get(K key) {
        int index = getIndex(key);
        List<Entry<K, V>> bucket = buckets.get(index);

        for (Entry<K, V> entry : bucket) {
            if (entry.getKey().equals(key)) {
                return entry.getValue();
            }
        }

        return null;
    }

    public void put(K key, V value) {
        int index = getIndex(key);
        List<Entry<K, V>> bucket = buckets.get(index);

        for (Entry<K, V> entry : bucket) {
            if (entry.getKey().equals(key)) {
                entry.setValue(value); // existing key: overwrite the value
                return;
            }
        }

        bucket.add(new Entry<>(key, value));
        size++;
    }

    public void remove(K key) {
        int index = getIndex(key);
        List<Entry<K, V>> bucket = buckets.get(index);

        for (Entry<K, V> entry : bucket) {
            if (entry.getKey().equals(key)) {
                bucket.remove(entry);
                size--;
                return;
            }
        }
    }

    public boolean containsKey(K key) {
        int index = getIndex(key);
        List<Entry<K, V>> bucket = buckets.get(index);

        for (Entry<K, V> entry : bucket) {
            if (entry.getKey().equals(key)) {
                return true;
            }
        }

        return false;
    }

    public int size() {
        return size;
    }

    private int getIndex(K key) {
        int hashCode = key.hashCode();
        return Math.abs(hashCode) % capacity;
    }

    private static class Entry<K, V> {
        private K key;
        private V value;

        public Entry(K key, V value) {
            this.key = key;
            this.value = value;
        }

        public K getKey() {
            return key;
        }

        public V getValue() {
            return value;
        }

        public void setValue(V value) {
            this.value = value;
        }
    }

    public static void main(String[] args) {
        HashTable<String, Integer> table = new HashTable<>(10);

        table.put("one", 1);
        table.put("two", 2);
        table.put("three", 3);

        System.out.println(table.get("two")); // Output: 2

        table.remove("two");

        System.out.println(table.containsKey("two")); // Output: false
    }
}
```

In this example, we have implemented a hash table using separate chaining: each bucket is a list of entries, and keys that hash to the same index share a bucket. We have provided methods for getting, putting, removing, and checking the presence of a key in the hash table. The hash function used here is the built-in `hashCode` method of the key objects.

The `HashTable` class is parameterized by two types, `K` and `V`, representing the key and value types, respectively. The `Entry` class is a private inner class used to store the key-value pairs.

Hash tables have an average case time complexity of O(1) for insertion, deletion, and retrieval operations. However, in the worst case, when there are many collisions, the time complexity can degrade to O(n), where n is the number of key-value pairs in the hash table. Therefore, it is important to choose a good hash function and handle collisions appropriately.

## Linked List

A linked list is a linear data structure that consists of a sequence of nodes, where each node contains a value and a reference to the next node in the sequence. Unlike arrays, linked lists do not have a fixed size and can dynamically grow or shrink as elements are added or removed.

The key advantage of a linked list over an array is its flexibility in terms of memory allocation. Each node in a linked list can be individually allocated, allowing for efficient use of memory. Additionally, linked lists can easily insert or remove elements at any position without the need for shifting elements.

Here is a Java implementation of a singly linked list:

```java
public class LinkedList<T> {
    private Node<T> head;

    public void add(T data) {
        Node<T> newNode = new Node<>(data);

        if (head == null) {
            head = newNode;
        } else {
            Node<T> current = head;

            while (current.getNext() != null) {
                current = current.getNext();
            }

            current.setNext(newNode);
        }
    }

    public void remove(T data) {
        if (head == null) {
            return;
        }

        if (head.getData().equals(data)) {
            head = head.getNext();
            return;
        }

        Node<T> current = head;
        Node<T> previous = null;

        while (current != null) {
            if (current.getData().equals(data)) {
                previous.setNext(current.getNext());
                return;
            }

            previous = current;
            current = current.getNext();
        }
    }

    public void display() {
        Node<T> current = head;

        while (current != null) {
            System.out.print(current.getData() + " ");
            current = current.getNext();
        }

        System.out.println();
    }

    private static class Node<T> {
        private T data;
        private Node<T> next;

        public Node(T data) {
            this.data = data;
        }

        public T getData() {
            return data;
        }

        public Node<T> getNext() {
            return next;
        }

        public void setNext(Node<T> next) {
            this.next = next;
        }
    }

    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>();

        list.add(1);
        list.add(2);
        list.add(3);

        list.display(); // Output: 1 2 3

        list.remove(2);

        list.display(); // Output: 1 3
    }
}
```

In this example, we have implemented a singly linked list to store integers. The `LinkedList` class provides methods for adding elements to the end of the list, removing elements, and displaying the contents of the list.

Each element in the linked list is represented by a `Node` object, which contains the data and a reference to the next node in the list. The `head` variable refers to the first node in the list. New elements are added at the end of the list by traversing to the last node and updating its `next` reference.

Linked lists have a time complexity of O(n) for insertion and removal at an arbitrary position, since the list may need to be traversed to reach that position; operations at the head, however, take constant O(1) time. Linked lists are useful in scenarios where dynamic resizing or frequent insertions and removals are required.
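
For comparison, the standard `java.util.LinkedList` is a doubly linked list, so inserting or removing at either end is O(1); the values below are arbitrary sample data:

```java
import java.util.LinkedList;

public class BuiltinLinkedListDemo {
    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>();

        list.addLast(2);
        list.addLast(3);
        list.addFirst(1); // O(1): no shifting, unlike inserting at the front of an array

        // Boxing explicitly so this removes the *value* 2, not the element at index 2.
        list.remove(Integer.valueOf(2));

        System.out.println(list); // [1, 3]
    }
}
```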

## Stack

A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle, where the last element added is the first one to be removed. It supports two main operations: push, which adds an element to the top of the stack, and pop, which removes the topmost element from the stack.

The key idea behind a stack is that it only allows access to the most recently added element, which is called the top of the stack. All other elements are inaccessible until they are popped off the stack. This makes a stack a useful data structure for keeping track of function calls, undo operations, or balancing parentheses.

Here is a Java implementation of a stack using an array:

```java
public class Stack<T> {
    private Object[] elements;
    private int top;

    public Stack(int capacity) {
        elements = new Object[capacity];
        top = -1;
    }

    public void push(T item) {
        if (top == elements.length - 1) {
            throw new StackOverflowError("Stack is full");
        }

        elements[++top] = item;
    }

    @SuppressWarnings("unchecked")
    public T pop() {
        if (top == -1) {
            throw new IllegalStateException("Stack is empty");
        }

        return (T) elements[top--];
    }

    @SuppressWarnings("unchecked")
    public T peek() {
        if (top == -1) {
            throw new IllegalStateException("Stack is empty");
        }

        return (T) elements[top];
    }

    public boolean isEmpty() {
        return top == -1;
    }

    public int size() {
        return top + 1;
    }

    public static void main(String[] args) {
        Stack<String> stack = new Stack<>(5);

        stack.push("apple");
        stack.push("banana");
        stack.push("cherry");

        System.out.println(stack.peek()); // Output: cherry

        stack.pop();

        System.out.println(stack.size()); // Output: 2
    }
}
```

In this example, we have implemented a stack using an array. The `Stack` class provides methods for pushing elements onto the stack, popping elements off the stack, peeking at the top element without removing it, checking if the stack is empty, and getting the size of the stack.

The stack is implemented using an array `elements` and an integer `top` to keep track of the index of the top element. The `push` method increments `top` and inserts the item at the corresponding index. The `pop` method returns and decrements `top`. The `peek` method returns the element at the top index without modifying `top`.

Stacks have a time complexity of O(1) for push, pop, peek, isEmpty, and size operations. They are commonly used in parsing, expression evaluation, and backtracking algorithms.
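
As a sketch of the parenthesis-balancing use case mentioned above, here is a checker built on the standard `ArrayDeque` used as a stack; the `BalancedParens` class name and the bracket set are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BalancedParens {
    // Returns true if every opening bracket is closed by the matching
    // bracket in the right order.
    public static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            switch (c) {
                // Push the *expected* closer, so matching is a simple comparison.
                case '(': stack.push(')'); break;
                case '[': stack.push(']'); break;
                case '{': stack.push('}'); break;
                case ')': case ']': case '}':
                    if (stack.isEmpty() || stack.pop() != c) {
                        return false;
                    }
                    break;
            }
        }
        return stack.isEmpty(); // leftover openers mean unbalanced
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("{[()]}")); // true
        System.out.println(isBalanced("{[(])}")); // false
    }
}
```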

## Queue

A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle, where the first element added is the first one to be removed. It supports two main operations: enqueue, which adds an element to the back of the queue, and dequeue, which removes the frontmost element from the queue.

The key idea behind a queue is that it allows access to elements in the order they were added, with elements added earlier being accessed before elements added later. This makes a queue a useful data structure for modeling real-world scenarios such as waiting lines, job scheduling, and breadth-first search.

Here is a Java implementation of a queue using an array:

```java
public class Queue<T> {
    private Object[] elements;
    private int front;
    private int rear;

    public Queue(int capacity) {
        elements = new Object[capacity];
        front = 0;
        rear = -1;
    }

    public void enqueue(T item) {
        if (rear == elements.length - 1) {
            throw new IllegalStateException("Queue is full");
        }

        elements[++rear] = item;
    }

    @SuppressWarnings("unchecked")
    public T dequeue() {
        if (isEmpty()) {
            throw new IllegalStateException("Queue is empty");
        }

        T item = (T) elements[front];
        elements[front++] = null;
        return item;
    }

    @SuppressWarnings("unchecked")
    public T peek() {
        if (isEmpty()) {
            throw new IllegalStateException("Queue is empty");
        }

        return (T) elements[front];
    }

    public boolean isEmpty() {
        return front > rear;
    }

    public int size() {
        return rear - front + 1;
    }

    public static void main(String[] args) {
        Queue<String> queue = new Queue<>(5);

        queue.enqueue("apple");
        queue.enqueue("banana");
        queue.enqueue("cherry");

        System.out.println(queue.peek()); // Output: apple

        queue.dequeue();

        System.out.println(queue.size()); // Output: 2
    }
}
```

In this example, we have implemented a queue using an array. The `Queue` class provides methods for enqueueing elements at the rear of the queue, dequeuing elements from the front of the queue, peeking at the front element without removing it, checking if the queue is empty, and getting the size of the queue.

The queue is implemented using an array `elements` and two integers `front` and `rear` to keep track of the indices of the front and rear elements, respectively. The `enqueue` method increments `rear` and inserts the item at the corresponding index. The `dequeue` method returns and increments `front`. The `peek` method returns the element at the front index without modifying `front`.

Queues have a time complexity of O(1) for enqueue, dequeue, peek, isEmpty, and size operations. They are commonly used in multi-threading, scheduling, and simulation algorithms.
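
To illustrate the breadth-first-search use case, here is a minimal BFS over an adjacency map, using the standard `ArrayDeque` as the queue; the sample graph and the `BfsDemo` class name are made up for the example:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class BfsDemo {
    // Returns vertices in breadth-first order starting from `start`.
    public static List<String> bfs(Map<String, List<String>> graph, String start) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Queue<String> queue = new ArrayDeque<>();

        visited.add(start);
        queue.add(start);

        while (!queue.isEmpty()) {
            String vertex = queue.remove(); // dequeue the oldest discovered vertex
            order.add(vertex);
            for (String neighbor : graph.getOrDefault(vertex, List.of())) {
                if (visited.add(neighbor)) { // add() returns false if already seen
                    queue.add(neighbor);     // enqueue newly discovered vertices
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("D"),
            "C", List.of("D"),
            "D", List.of()
        );
        System.out.println(bfs(graph, "A")); // [A, B, C, D]
    }
}
```

Because the queue is FIFO, vertices are visited level by level: all vertices one edge from the start before any vertex two edges away.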

## Heap

A heap is a complete binary tree-based data structure that satisfies the heap property. The heap property states that for a max heap, the value of each node is greater than or equal to the values of its children, and for a min heap, the value of each node is less than or equal to the values of its children.

The key idea behind a heap is to efficiently maintain the maximum or minimum element in a collection. This makes a heap a useful data structure for implementing priority queues, sorting algorithms, and graph algorithms like Dijkstra’s algorithm.

Here is a Java implementation of a max heap:

```java
public class MaxHeap {
    private int[] heap;
    private int size;

    public MaxHeap(int capacity) {
        heap = new int[capacity];
        size = 0;
    }

    public void insert(int item) {
        if (size == heap.length) {
            throw new IllegalStateException("Heap is full");
        }

        heap[size++] = item;
        siftUp(size - 1);
    }

    public int deleteMax() {
        if (isEmpty()) {
            throw new IllegalStateException("Heap is empty");
        }

        int max = heap[0];
        heap[0] = heap[--size];
        heap[size] = 0;
        siftDown(0);

        return max;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    private void siftUp(int index) {
        while (index > 0 && heap[index] > heap[parent(index)]) {
            swap(index, parent(index));
            index = parent(index);
        }
    }

    private void siftDown(int index) {
        while (index < size && !isLeaf(index)) {
            int leftChild = leftChild(index);
            int rightChild = rightChild(index);
            int maxChild = leftChild;

            if (rightChild < size && heap[rightChild] > heap[leftChild]) {
                maxChild = rightChild;
            }

            if (heap[index] >= heap[maxChild]) {
                break;
            }

            swap(index, maxChild);
            index = maxChild;
        }
    }

    private int parent(int index) {
        return (index - 1) / 2;
    }

    private int leftChild(int index) {
        return 2 * index + 1;
    }

    private int rightChild(int index) {
        return 2 * index + 2;
    }

    private boolean isLeaf(int index) {
        return index >= size / 2 && index < size;
    }

    private void swap(int i, int j) {
        int temp = heap[i];
        heap[i] = heap[j];
        heap[j] = temp;
    }

    public static void main(String[] args) {
        MaxHeap maxHeap = new MaxHeap(10);

        maxHeap.insert(5);
        maxHeap.insert(3);
        maxHeap.insert(8);

        System.out.println(maxHeap.deleteMax()); // Output: 8
    }
}
```

In this example, we have implemented a max heap using an array. The `MaxHeap` class provides methods for inserting elements into the heap, deleting the max element from the heap, and checking if the heap is empty.

The heap is implemented using an array `heap` and an integer `size` to keep track of the number of elements. The `insert` method appends the item to the end of the array and sifts it up to its correct position. The `deleteMax` method swaps the root element with the last element, removes the last element, and sifts down the root to its correct position.

Heaps have a time complexity of O(log n) for insert and deleteMax operations, where n is the number of elements in the heap. They are commonly used in sorting algorithms like heap sort and priority queues.
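
In practice you would usually reach for the standard `java.util.PriorityQueue`, a binary min heap; reversing the comparator gives the max-heap behavior implemented above:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // PriorityQueue is a min heap by default; a reversed comparator
        // makes poll() return the largest element first.
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Comparator.reverseOrder());

        maxHeap.add(5);
        maxHeap.add(3);
        maxHeap.add(8);

        System.out.println(maxHeap.poll()); // 8
        System.out.println(maxHeap.poll()); // 5
    }
}
```

Like the hand-rolled version, `add` and `poll` run in O(log n); unlike it, the backing array grows automatically.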

## Tree

A tree is a widely used abstract data type that represents a hierarchical structure. It consists of nodes connected by edges, where each node can have zero or more child nodes. The topmost node is called the root, and each node (except the root) has a parent node.

The key idea behind a tree is to represent relationships between elements in a hierarchical manner. Trees are used to model hierarchical structures like file systems, organization charts, and decision trees. They are also used in algorithms like binary search trees and AVL trees.

Here is a Java implementation of a binary tree:

```java
public class BinaryTree<T extends Comparable<T>> {
    private Node<T> root;

    public void insert(T data) {
        Node<T> newNode = new Node<>(data);

        if (root == null) {
            root = newNode;
        } else {
            insertNode(root, newNode);
        }
    }

    private void insertNode(Node<T> currentNode, Node<T> newNode) {
        if (newNode.getData() == null) {
            return;
        }

        if (newNode.getData().compareTo(currentNode.getData()) < 0) {
            if (currentNode.getLeft() == null) {
                currentNode.setLeft(newNode);
            } else {
                insertNode(currentNode.getLeft(), newNode);
            }
        } else {
            if (currentNode.getRight() == null) {
                currentNode.setRight(newNode);
            } else {
                insertNode(currentNode.getRight(), newNode);
            }
        }
    }

    public void display() {
        displayNode(root);
    }

    private void displayNode(Node<T> currentNode) {
        if (currentNode == null) {
            return;
        }

        // In-order traversal: left subtree, then the node, then right subtree
        displayNode(currentNode.getLeft());
        System.out.print(currentNode.getData() + " ");
        displayNode(currentNode.getRight());
    }

    private static class Node<T> {
        private T data;
        private Node<T> left;
        private Node<T> right;

        public Node(T data) {
            this.data = data;
        }

        public T getData() {
            return data;
        }

        public Node<T> getLeft() {
            return left;
        }

        public void setLeft(Node<T> left) {
            this.left = left;
        }

        public Node<T> getRight() {
            return right;
        }

        public void setRight(Node<T> right) {
            this.right = right;
        }
    }

    public static void main(String[] args) {
        BinaryTree<Integer> tree = new BinaryTree<>();

        tree.insert(5);
        tree.insert(3);
        tree.insert(8);

        tree.display(); // Output: 3 5 8
    }
}
```

In this example, we have implemented a binary tree to store integers. The `BinaryTree` class provides a method for inserting elements into the tree and a method for displaying the contents of the tree.

Each element in the binary tree is represented by a `Node` object, which contains the data and references to the left and right child nodes. The `insert` method compares the data of the new node with the data of the current node and recursively inserts the new node into the left or right subtree. The `display` method performs an in-order traversal of the tree, printing the data of each node in ascending order.

For the binary search tree above, insertion takes O(h) time, where h is the height of the tree: O(log n) on average for a reasonably balanced tree, but O(n) in the worst case when the tree degenerates into a list. Displaying all n nodes with an in-order traversal takes O(n). Trees are used in many algorithms and data structures, including binary search trees, AVL trees, and expression trees.
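
For a balanced alternative to the plain binary search tree above, the standard library's `TreeSet` is backed by a self-balancing red-black tree, which keeps operations O(log n) even in the worst case; the values here mirror the earlier example:

```java
import java.util.TreeSet;

public class TreeSetDemo {
    public static void main(String[] args) {
        TreeSet<Integer> set = new TreeSet<>();

        set.add(5);
        set.add(3);
        set.add(8);

        // Iteration order is sorted order, like an in-order traversal.
        System.out.println(set);            // [3, 5, 8]
        System.out.println(set.first());    // 3
        System.out.println(set.ceiling(4)); // 5, the smallest element >= 4
    }
}
```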

## Graph

A graph is a non-linear data structure that consists of a set of vertices (also called nodes) and a set of edges that connect pairs of vertices. It is used to represent relationships between objects in a network. Graphs can be directed (edges have a direction) or undirected (edges have no direction).

The key idea behind a graph is to model relationships between entities. Graphs are used to represent social networks, road networks, dependency relationships, and many other real-world scenarios. They are also used in algorithms like depth-first search and breadth-first search.

Here is a Java implementation of an undirected graph using an adjacency list:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Graph<T> {
    private Map<T, List<T>> adjacencyList;

    public Graph() {
        adjacencyList = new HashMap<>();
    }

    public void addVertex(T vertex) {
        adjacencyList.putIfAbsent(vertex, new ArrayList<>());
    }

    public void addEdge(T source, T destination) {
        if (!hasVertex(source) || !hasVertex(destination)) {
            throw new IllegalArgumentException("One or both vertices do not exist");
        }

        // Undirected graph: record the edge in both directions
        adjacencyList.get(source).add(destination);
        adjacencyList.get(destination).add(source);
    }

    public void removeVertex(T vertex) {
        if (!hasVertex(vertex)) {
            throw new IllegalArgumentException("Vertex does not exist");
        }

        adjacencyList.remove(vertex);

        for (List<T> neighbors : adjacencyList.values()) {
            neighbors.remove(vertex);
        }
    }

    public void removeEdge(T source, T destination) {
        if (!hasVertex(source) || !hasVertex(destination)) {
            throw new IllegalArgumentException("One or both vertices do not exist");
        }

        adjacencyList.get(source).remove(destination);
        adjacencyList.get(destination).remove(source);
    }

    public List<T> getNeighbors(T vertex) {
        if (!hasVertex(vertex)) {
            throw new IllegalArgumentException("Vertex does not exist");
        }

        return adjacencyList.get(vertex);
    }

    public boolean hasVertex(T vertex) {
        return adjacencyList.containsKey(vertex);
    }

    public boolean hasEdge(T source, T destination) {
        if (!hasVertex(source) || !hasVertex(destination)) {
            throw new IllegalArgumentException("One or both vertices do not exist");
        }

        return adjacencyList.get(source).contains(destination);
    }

    public static void main(String[] args) {
        Graph<String> graph = new Graph<>();

        graph.addVertex("A");
        graph.addVertex("B");
        graph.addVertex("C");

        graph.addEdge("A", "B");
        graph.addEdge("B", "C");

        System.out.println(graph.getNeighbors("B")); // Output: [A, C]

        graph.removeEdge("A", "B");

        System.out.println(graph.hasEdge("A", "B")); // Output: false
    }
}
```

In this example, we have implemented an undirected graph using an adjacency list. The `Graph` class provides methods for adding vertices, adding edges between vertices, removing vertices, removing edges, and checking for the presence of vertices and edges.

The graph is implemented using a `HashMap` where the keys are the vertices and the values are lists of neighboring vertices. The `addVertex` method adds a vertex to the graph, initializing an empty list of neighbors. The `addEdge` method adds an edge between two vertices by adding each vertex to the other’s list of neighbors. The `removeVertex` method removes a vertex from the graph and removes it from the list of neighbors of all other vertices. The `removeEdge` method removes an edge between two vertices by removing each vertex from the other’s list of neighbors. The `getNeighbors` method returns the list of neighbors of a given vertex. The `hasVertex` method checks if a vertex exists in the graph. The `hasEdge` method checks if an edge exists between two vertices.

With this adjacency-list representation, adding a vertex or an edge takes O(1) time on average. Removing an edge or checking for one takes time proportional to the degree of the vertices involved, and removing a vertex takes O(V + E) time, since it must also be deleted from every neighbor list. Adjacency lists are commonly used in algorithms such as depth-first search, breadth-first search, and shortest-path algorithms.

## Dynamic Programming

Dynamic programming is a problem-solving technique used to solve problems by breaking them down into smaller overlapping subproblems and solving each subproblem only once. It is especially useful for optimization problems where the solution can be expressed as a recursive relationship between subproblems.

The key idea behind dynamic programming is to store the solutions to subproblems in a table or memoization array and reuse them when needed. This avoids redundant computations and improves the efficiency of the algorithm. Dynamic programming is often used to solve problems that can be characterized by the principle of optimality, where an optimal solution can be constructed from optimal solutions to subproblems.

Here is an example of dynamic programming using the Fibonacci sequence:

```java
public class Fibonacci {
    private static long[] memo;

    public static long fibonacci(int n) {
        if (n <= 1) {
            return n;
        }

        if (memo[n] != 0) {
            return memo[n]; // subproblem already solved: reuse it
        }

        long result = fibonacci(n - 1) + fibonacci(n - 2);
        memo[n] = result;

        return result;
    }

    public static void main(String[] args) {
        int n = 10;
        memo = new long[n + 1];

        long result = fibonacci(n);

        System.out.println("Fibonacci(" + n + ") = " + result);
    }
}
```

In this example, we have implemented the Fibonacci sequence using dynamic programming. The `fibonacci` method takes an integer `n` as input and returns the nth Fibonacci number. It uses memoization to store the solutions to subproblems in the `memo` array. If the solution to a subproblem is already computed, it is retrieved from the `memo` array. Otherwise, the solution is computed recursively and stored in the `memo` array for future use.

Dynamic programming can significantly improve the efficiency of algorithms by avoiding redundant computations. It is commonly used to solve problems in various domains, including optimization, scheduling, and graph algorithms.
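
The same computation can also be written bottom-up (tabulation), filling the table iteratively from the base cases instead of recursing; this sketch assumes the same `long` value range as the memoized version:

```java
public class FibonacciBottomUp {
    // Tabulation: build solutions from the smallest subproblems upward,
    // so there is no recursion and no call-stack overhead.
    public static long fibonacci(int n) {
        if (n <= 1) {
            return n;
        }

        long[] table = new long[n + 1];
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];
        }

        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10)); // 55
    }
}
```

Bottom-up and memoized top-down solve the same subproblems; the bottom-up form makes the O(n) time and space costs explicit and can often be reduced to O(1) space by keeping only the last two table entries.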


## Time Complexity of Binary Search

The time complexity of binary search is O(log n), where n is the number of elements in the sorted array. Binary search achieves this time complexity by repeatedly dividing the search space in half until the target element is found or determined to be not present.

The key idea behind the time complexity of binary search is the logarithmic nature of the divide and conquer approach. Each step of the binary search cuts the search space in half, reducing the number of remaining elements to consider. This results in a logarithmic time complexity, as the number of steps required to find the target element grows logarithmically with the number of elements in the array.

Here is an example to demonstrate the time complexity of binary search:

```java
public class BinarySearchTime {
    public static int binarySearch(int[] arr, int target) {
        int left = 0;
        int right = arr.length - 1;

        while (left <= right) {
            int mid = left + (right - left) / 2;

            if (arr[mid] == target) {
                return mid;
            } else if (arr[mid] < target) {
                left = mid + 1;
            } else {
                right = mid - 1;
            }
        }

        return -1;
    }

    public static void main(String[] args) {
        int[] arr = new int[1000000];

        for (int i = 0; i < arr.length; i++) {
            arr[i] = i;
        }

        int target = 999999;

        long startTime = System.nanoTime();
        int result = binarySearch(arr, target);
        long endTime = System.nanoTime();

        long elapsedTime = endTime - startTime;

        System.out.println("Binary search time: " + elapsedTime + " nanoseconds");
    }
}
```

In this example, we have an array `arr` containing one million elements in ascending order. We are searching for the target element 999999 using the `binarySearch` method. We measure the execution time of the binary search using the `System.nanoTime` method.

The time complexity of binary search allows for efficient searching in large sorted arrays. It is widely used in various applications, including searching, sorting, and optimization algorithms. Understanding the time complexity of binary search is essential for designing efficient algorithms and selecting appropriate data structures for different scenarios.

## How Merge Sort Works

As described in the Merge Sort section above, the algorithm repeatedly halves the array until each subarray holds a single element, then merges the sorted subarrays back together until the entire array is sorted. This section walks through that process step by step.

Here is an example to demonstrate how merge sort works:

```java
import java.util.Arrays;

public class MergeSortDemo {
    public static void mergeSort(int[] arr) {
        if (arr.length <= 1) {
            return;
        }

        int mid = arr.length / 2;
        int[] left = new int[mid];
        int[] right = new int[arr.length - mid];

        for (int i = 0; i < mid; i++) {
            left[i] = arr[i];
        }

        for (int i = mid; i < arr.length; i++) {
            right[i - mid] = arr[i];
        }

        mergeSort(left);
        mergeSort(right);
        merge(arr, left, right);
    }

    private static void merge(int[] arr, int[] left, int[] right) {
        int i = 0;
        int j = 0;
        int k = 0;

        while (i < left.length && j < right.length) {
            if (left[i] <= right[j]) {
                arr[k++] = left[i++];
            } else {
                arr[k++] = right[j++];
            }
        }

        while (i < left.length) {
            arr[k++] = left[i++];
        }

        while (j < right.length) {
            arr[k++] = right[j++];
        }
    }

    public static void main(String[] args) {
        int[] arr = {5, 2, 9, 1, 7, 6, 3};

        mergeSort(arr);

        System.out.println("Sorted array: " + Arrays.toString(arr));
    }
}
```

In this example, we have an unsorted array `arr` containing integers. We are sorting the array using the `mergeSort` method. The `mergeSort` method recursively divides the array into smaller subarrays and sorts them. The `merge` method merges the sorted subarrays to produce the final sorted array.

As noted earlier, merge sort runs in O(n log n) time, where n is the number of elements, and it is stable: the relative order of equal elements is preserved, which makes it well suited to sorting linked lists or records by multiple keys.

Understanding how merge sort works is essential for designing efficient sorting algorithms and analyzing the time complexity of different sorting algorithms.

## What is a Hash Table

As described in the Hash Table section above, a hash table maps each key to a bucket index by hashing the key, which gives constant-time access on average. In practice you rarely implement one yourself: Java's standard library provides `HashMap`.

Here is an example to demonstrate how a hash table works:

```java
import java.util.HashMap;
import java.util.Map;

public class HashTableDemo {
    public static void main(String[] args) {
        Map<String, Integer> scores = new HashMap<>();

        scores.put("Alice", 90);
        scores.put("Bob", 85);
        scores.put("Charlie", 95);

        int aliceScore = scores.get("Alice");

        System.out.println("Alice's score: " + aliceScore); // Output: Alice's score: 90

        scores.remove("Bob");

        boolean containsCharlie = scores.containsKey("Charlie");

        System.out.println("Contains Charlie: " + containsCharlie); // Output: Contains Charlie: true

        int size = scores.size();

        System.out.println("Size of hash table: " + size); // Output: Size of hash table: 2
    }
}
```

In this example, we have created a hash table `scores` using the `HashMap` class. We have added key-value pairs to the hash table using the `put` method, and retrieved values associated with keys using the `get` method. We have also removed a key-value pair using the `remove` method and checked if a key exists using the `containsKey` method. Finally, we have determined the number of key-value pairs in the hash table using the `size` method.

As noted earlier, hash tables offer average-case O(1) insertion, deletion, and retrieval, degrading toward O(n) when many keys collide, so a good hash function and sensible collision handling matter.

Understanding what a hash table is and how it works is essential for designing efficient data structures and algorithms that require fast insertion, deletion, and retrieval of key-value pairs.

A linked list is a linear data structure that consists of a sequence of nodes, where each node contains a value and a reference to the next node in the sequence. Unlike arrays, linked lists do not have a fixed size and can dynamically grow or shrink as elements are added or removed.

Here is an example of implementing a singly linked list in Java:

```java
public class LinkedList<T> {
    private Node<T> head;

    public void add(T data) {
        Node<T> newNode = new Node<>(data);

        if (head == null) {
            head = newNode;
        } else {
            Node<T> current = head;

            while (current.getNext() != null) {
                current = current.getNext();
            }

            current.setNext(newNode);
        }
    }

    public void remove(T data) {
        if (head == null) {
            return;
        }

        if (head.getData().equals(data)) {
            head = head.getNext();
            return;
        }

        Node<T> current = head;
        Node<T> previous = null;

        while (current != null) {
            if (current.getData().equals(data)) {
                previous.setNext(current.getNext());
                return;
            }

            previous = current;
            current = current.getNext();
        }
    }

    public void display() {
        Node<T> current = head;

        while (current != null) {
            System.out.print(current.getData() + " ");
            current = current.getNext();
        }

        System.out.println();
    }

    private static class Node<T> {
        private T data;
        private Node<T> next;

        public Node(T data) {
            this.data = data;
        }

        public T getData() {
            return data;
        }

        public Node<T> getNext() {
            return next;
        }

        public void setNext(Node<T> next) {
            this.next = next;
        }
    }

    public static void main(String[] args) {
        LinkedList<Integer> list = new LinkedList<>();

        list.add(1);
        list.add(2);
        list.add(3);

        list.display(); // Output: 1 2 3

        list.remove(2);

        list.display(); // Output: 1 3
    }
}
```

In this example, we have implemented a singly linked list to store integers. The `LinkedList` class provides methods for adding elements to the end of the list, removing elements, and displaying the contents of the list.

Each element in the linked list is represented by a `Node` object, which contains the data and a reference to the next node in the list. The `head` variable refers to the first node in the list. New elements are added at the end of the list by traversing to the last node and updating its `next` reference.

In this implementation, appending and removing take O(n) time, as the list may need to be traversed from the head; accessing or inserting at the head, by contrast, takes constant O(1) time. Linked lists are useful in scenarios where dynamic resizing or frequent insertions and removals are required.
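That O(1) head behavior extends to insertion at the front: a hypothetical `addFirst` method only needs to repoint `head`, with no traversal. A minimal standalone sketch:

```java
public class HeadInsertDemo {
    // Minimal node for illustration.
    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head;

    // O(1): the new node simply becomes the head; no traversal is needed.
    public void addFirst(int data) {
        Node node = new Node(data);
        node.next = head;
        head = node;
    }

    // Collect the elements front-to-back as a string, for display.
    public String contents() {
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(n.data);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        HeadInsertDemo list = new HeadInsertDemo();
        list.addFirst(3);
        list.addFirst(2);
        list.addFirst(1);
        System.out.println(list.contents()); // prints "1 2 3"
    }
}
```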

## Difference Between a Stack and a Queue

Both stacks and queues are linear data structures that store elements in a specific order. However, there are key differences between the two in terms of their order of insertion and removal, as well as the operations they support.

Stack:
- Follows the Last-In-First-Out (LIFO) principle, where the last element added is the first one to be removed.
- Supports two main operations: push, which adds an element to the top of the stack, and pop, which removes the topmost element from the stack.
- Additional operations include peek, which returns the top element without removing it, and isEmpty, which checks if the stack is empty.
- Can be implemented using an array or a linked list.

Queue:
- Follows the First-In-First-Out (FIFO) principle, where the first element added is the first one to be removed.
- Supports two main operations: enqueue, which adds an element to the back of the queue, and dequeue, which removes the frontmost element from the queue.
- Additional operations include peek, which returns the front element without removing it, and isEmpty, which checks if the queue is empty.
- Can be implemented using an array or a linked list.

Here is an example to demonstrate the difference between a stack and a queue:

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.Stack;

public class StackAndQueueDemo {
    public static void main(String[] args) {
        Stack<Integer> stack = new Stack<>();

        // Stack operations
        stack.push(1);
        stack.push(2);
        stack.push(3);

        System.out.println(stack.pop()); // Output: 3

        // Queue operations
        Queue<Integer> queue = new LinkedList<>();

        queue.add(1);
        queue.add(2);
        queue.add(3);

        System.out.println(queue.remove()); // Output: 1
    }
}
```

In this example, we have used the built-in `Stack` class and the `LinkedList` class to demonstrate the difference between a stack and a queue. We have performed stack operations using the `push` and `pop` methods, and queue operations using the `add` and `remove` methods.

Understanding the difference between a stack and a queue is important for choosing the appropriate data structure for different scenarios. Stacks are commonly used in scenarios where the order of insertion and removal is important, such as function calls and undo operations. Queues are commonly used in scenarios where the order of insertion and removal follows a first-in-first-out pattern, such as job scheduling and breadth-first search algorithms.
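In modern Java, `java.util.ArrayDeque` is generally preferred over the legacy `Stack` class (which extends the synchronized `Vector`) and serves both roles:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

public class DequeDemo {
    public static void main(String[] args) {
        // Used as a stack (LIFO): push and pop operate on the head.
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        stack.push(3);
        System.out.println(stack.pop()); // prints 3

        // Used as a queue (FIFO): add at the tail, remove from the head.
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(1);
        queue.add(2);
        queue.add(3);
        System.out.println(queue.remove()); // prints 1
    }
}
```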

## Introduction to Heap Data Structure

A heap is a complete binary tree-based data structure that satisfies the heap property. The heap property states that for a max heap, the value of each node is greater than or equal to the values of its children, and for a min heap, the value of each node is less than or equal to the values of its children.

The key idea behind a heap is to efficiently maintain the maximum or minimum element in a collection. This makes a heap a useful data structure for implementing priority queues, sorting algorithms, and graph algorithms like Dijkstra’s algorithm.

Heaps can be implemented using arrays or explicit tree nodes. In the array-based implementation, the elements are stored in an array, where the parent-child relationships are determined by the indices of the elements (the children of index i sit at indices 2i + 1 and 2i + 2). In the pointer-based implementation, each node contains a value and references to its left and right child nodes.

Heaps support the following main operations:
- Insertion: Adds an element to the heap while maintaining the heap property.
- Deletion: Removes the maximum or minimum element from the heap while maintaining the heap property.
- Peek: Returns the maximum or minimum element without modifying the heap.
- Heapify: Rearranges the elements of an array into a heap.
- Heap sort: Sorts an array using the heap data structure.
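Before hand-rolling a heap, note that the standard library already provides one: `java.util.PriorityQueue` is a binary min-heap by default, and a reversed `Comparator` turns it into a max-heap:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Min-heap by default: poll() returns the smallest element.
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        minHeap.add(5);
        minHeap.add(3);
        minHeap.add(8);
        System.out.println(minHeap.poll()); // prints 3

        // Max-heap via a reversed comparator: poll() returns the largest.
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Comparator.reverseOrder());
        maxHeap.add(5);
        maxHeap.add(3);
        maxHeap.add(8);
        System.out.println(maxHeap.poll()); // prints 8
    }
}
```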

Here is an example to demonstrate the heap data structure and its operations using the array-based implementation:

```java
import java.util.Arrays;

public class HeapDemo {
    private int[] heap;
    private int size;

    public HeapDemo(int capacity) {
        heap = new int[capacity];
        size = 0;
    }

    public void insert(int item) {
        if (size == heap.length) {
            throw new IllegalStateException("Heap is full");
        }

        heap[size++] = item;
        siftUp(size - 1);
    }

    public int deleteMax() {
        if (isEmpty()) {
            throw new IllegalStateException("Heap is empty");
        }

        int max = heap[0];
        heap[0] = heap[--size];
        heap[size] = 0;
        siftDown(0);

        return max;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    private void siftUp(int index) {
        while (index > 0 && heap[index] > heap[parent(index)]) {
            swap(index, parent(index));
            index = parent(index);
        }
    }

    private void siftDown(int index) {
        while (index < size && !isLeaf(index)) {
            int leftChild = leftChild(index);
            int rightChild = rightChild(index);
            int maxChild = leftChild;

            if (rightChild < size && heap[rightChild] > heap[leftChild]) {
                maxChild = rightChild;
            }

            if (heap[index] >= heap[maxChild]) {
                break;
            }

            swap(index, maxChild);
            index = maxChild;
        }
    }

    private int parent(int index) {
        return (index - 1) / 2;
    }

    private int leftChild(int index) {
        return 2 * index + 1;
    }

    private int rightChild(int index) {
        return 2 * index + 2;
    }

    private boolean isLeaf(int index) {
        return index >= size / 2 && index < size;
    }

    private void swap(int i, int j) {
        int temp = heap[i];
        heap[i] = heap[j];
        heap[j] = temp;
    }

    public static void main(String[] args) {
        HeapDemo maxHeap = new HeapDemo(10);

        maxHeap.insert(5);
        maxHeap.insert(3);
        maxHeap.insert(8);

        System.out.println(maxHeap.deleteMax()); // Output: 8

        System.out.println(Arrays.toString(maxHeap.heap)); // Output: [5, 3, 0, 0, 0, 0, 0, 0, 0, 0]
    }
}
```

In this example, we have implemented a max heap using an array. The `HeapDemo` class provides methods for inserting elements into the heap, deleting the maximum element from the heap, and checking if the heap is empty.

The heap is implemented using an array `heap` and an integer `size` to keep track of the number of elements. The `insert` method appends the item to the end of the array and sifts it up to its correct position. The `deleteMax` method swaps the root element with the last element, removes the last element, and sifts down the root to its correct position.

Heaps have a time complexity of O(log n) for insert and deleteMax operations, where n is the number of elements in the heap. They are commonly used in sorting algorithms like heap sort and priority queues.

## Types of Trees in Java

In Java, there are various types of trees that can be implemented to represent hierarchical structures or to solve specific problems. Some common types of trees in Java include:

1. Binary Tree: A binary tree is a tree data structure where each node has at most two children. It is one of the most commonly used tree structures and is used in various algorithms and applications.

2. Binary Search Tree: A binary search tree is a binary tree where the value of each node is greater than all the values in its left subtree and less than all the values in its right subtree. This property allows for efficient searching, insertion, and deletion of elements.

3. AVL Tree: An AVL tree is a self-balancing binary search tree where the heights of the left and right subtrees of any node differ by at most one. This ensures that the tree remains balanced and provides efficient operations even in the worst-case scenarios.

4. Red-Black Tree: A red-black tree is another type of self-balancing binary search tree where each node is colored either red or black. The tree is balanced using a set of color and rotation rules, which ensure that the tree remains balanced and provides efficient operations.

5. Trie: A trie, also known as a prefix tree, is a tree-like data structure used for efficient retrieval of keys in a large set of strings. Each node in the trie represents a prefix or a complete word, and the edges represent the characters of the keys.

6. B-Tree: A B-tree is a self-balancing search tree that can have more than two children per node. It is commonly used in file systems and databases to store large amounts of data efficiently.

7. Heap: A heap is a complete binary tree-based data structure that satisfies the heap property. It is commonly used to maintain the maximum or minimum element in a collection and is used in priority queues, sorting algorithms, and graph algorithms.

These are just a few examples of the types of trees that can be implemented in Java. Each type of tree has its own set of properties and use cases. Understanding the different types of trees and their characteristics is essential for selecting the appropriate tree structure for a given problem or application.
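Several of these structures back classes in the JDK: `TreeMap` and `TreeSet` are implemented as red-black trees, and `PriorityQueue` as a binary heap. For example, `TreeMap` provides sorted-order operations in O(log n) time:

```java
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        // TreeMap keeps keys in sorted order via a red-black tree,
        // giving O(log n) put/get/remove and ordered iteration.
        TreeMap<Integer, String> map = new TreeMap<>();
        map.put(5, "five");
        map.put(3, "three");
        map.put(8, "eight");

        System.out.println(map.firstKey());    // prints 3
        System.out.println(map.ceilingKey(6)); // prints 8 (smallest key >= 6)
        System.out.println(map.keySet());      // prints [3, 5, 8]
    }
}
```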

## Graph Data Structure

A graph is a non-linear data structure that consists of a set of vertices (also called nodes) and a set of edges that connect pairs of vertices. It is used to represent relationships between objects in a network. Graphs can be directed (edges have a direction) or undirected (edges have no direction).

The key idea behind a graph is to model relationships between entities. Graphs are used to represent social networks, road networks, dependency relationships, and many other real-world scenarios. They are also used in algorithms like depth-first search and breadth-first search.

Graphs can be implemented using various data structures, including adjacency matrix, adjacency list, and edge list. The choice of data structure depends on the specific requirements of the application, such as the number of vertices, the number of edges, and the type of operations to be performed.

Some common operations and algorithms associated with graphs include:

- Adding and removing vertices and edges
- Checking if a graph is connected or disconnected
- Finding the shortest path between two vertices using algorithms like Dijkstra’s algorithm
- Performing graph traversal using depth-first search or breadth-first search
- Finding cycles in a graph using algorithms like Tarjan’s algorithm or Kosaraju’s algorithm

Here is an example to demonstrate the graph data structure using an adjacency list implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GraphDemo {
    private final Map<Integer, List<Integer>> adjacencyList;

    public GraphDemo() {
        adjacencyList = new HashMap<>();
    }

    public void addVertex(int vertex) {
        adjacencyList.putIfAbsent(vertex, new ArrayList<>());
    }

    public void addEdge(int source, int destination) {
        if (!hasVertex(source) || !hasVertex(destination)) {
            throw new IllegalArgumentException("One or both vertices do not exist");
        }

        adjacencyList.get(source).add(destination);
        adjacencyList.get(destination).add(source);
    }

    public void removeVertex(int vertex) {
        if (!hasVertex(vertex)) {
            throw new IllegalArgumentException("Vertex does not exist");
        }

        adjacencyList.remove(vertex);

        for (List<Integer> neighbors : adjacencyList.values()) {
            neighbors.remove((Integer) vertex);
        }
    }

    public void removeEdge(int source, int destination) {
        if (!hasVertex(source) || !hasVertex(destination)) {
            throw new IllegalArgumentException("One or both vertices do not exist");
        }

        adjacencyList.get(source).remove((Integer) destination);
        adjacencyList.get(destination).remove((Integer) source);
    }

    public List<Integer> getNeighbors(int vertex) {
        if (!hasVertex(vertex)) {
            throw new IllegalArgumentException("Vertex does not exist");
        }

        return adjacencyList.get(vertex);
    }

    public boolean hasVertex(int vertex) {
        return adjacencyList.containsKey(vertex);
    }

    public boolean hasEdge(int source, int destination) {
        if (!hasVertex(source) || !hasVertex(destination)) {
            throw new IllegalArgumentException("One or both vertices do not exist");
        }

        return adjacencyList.get(source).contains(destination);
    }

    public static void main(String[] args) {
        GraphDemo graph = new GraphDemo();

        graph.addVertex(1);
        graph.addVertex(2);
        graph.addVertex(3);

        graph.addEdge(1, 2);
        graph.addEdge(2, 3);

        System.out.println(graph.getNeighbors(2)); // Output: [1, 3]

        graph.removeEdge(1, 2);

        System.out.println(graph.hasEdge(1, 2)); // Output: false
    }
}
```

In this example, we have implemented a graph using an adjacency list. The `GraphDemo` class provides methods for adding vertices, adding edges between vertices, removing vertices, removing edges, and checking for the presence of vertices and edges.

The graph is implemented using a `HashMap` where the keys are the vertices and the values are lists of neighboring vertices. The `addVertex` method adds a vertex to the graph, initializing an empty list of neighbors. The `addEdge` method adds an edge between two vertices by adding each vertex to the other’s list of neighbors. The `removeVertex` method removes a vertex from the graph and removes it from the list of neighbors of all other vertices. The `removeEdge` method removes an edge between two vertices by removing each vertex from the other’s list of neighbors. The `getNeighbors` method returns the list of neighbors of a given vertex. The `hasVertex` method checks if a vertex exists in the graph. The `hasEdge` method checks if an edge exists between two vertices.

With an adjacency-list representation, adding a vertex or an edge takes O(1) amortized time, removing an edge takes time proportional to the degree of its endpoints, and removing a vertex takes O(V + E) time, since the vertex must also be deleted from every neighbor list. Graphs are central to algorithms such as depth-first search, breadth-first search, and shortest path algorithms.
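As a sketch of the traversal algorithms mentioned above, a breadth-first search over a plain adjacency-list map (independent of the `GraphDemo` class) might look like this:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class BfsDemo {
    // Visit vertices in breadth-first order starting from `start`.
    static List<Integer> bfs(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> visited = new HashSet<>();
        Queue<Integer> queue = new ArrayDeque<>();
        visited.add(start);
        queue.add(start);

        while (!queue.isEmpty()) {
            int vertex = queue.remove();
            order.add(vertex);
            for (int neighbor : graph.getOrDefault(vertex, List.of())) {
                if (visited.add(neighbor)) { // add() returns false if already visited
                    queue.add(neighbor);
                }
            }
        }

        return order;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> graph = new HashMap<>();
        graph.put(1, List.of(2, 3));
        graph.put(2, List.of(1, 4));
        graph.put(3, List.of(1));
        graph.put(4, List.of(2));

        System.out.println(bfs(graph, 1)); // prints [1, 2, 3, 4]
    }
}
```

The queue guarantees the FIFO visiting order: all vertices at distance 1 from the start are processed before any vertex at distance 2.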

## Introduction to Dynamic Programming

Dynamic programming is a problem-solving technique used to solve problems by breaking them down into smaller overlapping subproblems and solving each subproblem only once. It is especially useful for optimization problems where the solution can be expressed as a recursive relationship between subproblems.

The key idea behind dynamic programming is to store the solutions to subproblems in a table or memoization array and reuse them when needed. This avoids redundant computations and improves the efficiency of the algorithm. Dynamic programming is often used to solve problems that can be characterized by the principle of optimality, where an optimal solution can be constructed from optimal solutions to subproblems.

Dynamic programming can be applied to a wide range of problems, including but not limited to:

- Fibonacci sequence
- Knapsack problem
- Longest common subsequence
- Matrix chain multiplication
- Edit distance
- Shortest path algorithms (e.g., the Bellman-Ford and Floyd-Warshall algorithms)

Here is an example to demonstrate dynamic programming using the Fibonacci sequence:

```java
public class FibonacciDemo {
    private static long[] memo;

    public static long fibonacci(int n) {
        if (n <= 1) {
            return n;
        }

        if (memo[n] != 0) {
            return memo[n];
        }

        long result = fibonacci(n - 1) + fibonacci(n - 2);
        memo[n] = result;

        return result;
    }

    public static void main(String[] args) {
        int n = 10;
        memo = new long[n + 1];

        long result = fibonacci(n);

        System.out.println("Fibonacci(" + n + ") = " + result);
    }
}
```
```

In this example, we have implemented the Fibonacci sequence using dynamic programming. The `fibonacci` method takes an integer `n` as input and returns the nth Fibonacci number. It uses memoization to store the solutions to subproblems in the `memo` array. If the solution to a subproblem is already computed, it is retrieved from the `memo` array. Otherwise, the solution is computed recursively and stored in the `memo` array for future use.

Dynamic programming can significantly improve the efficiency of algorithms by avoiding redundant computations. It is commonly used to solve problems in various domains, including optimization, scheduling, and graph algorithms.
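The same problem can also be solved bottom-up (tabulation): instead of recursing from n downward, a loop fills the table from the base cases up, computing each subproblem exactly once:

```java
public class FibonacciTabulation {
    public static long fibonacci(int n) {
        if (n <= 1) {
            return n;
        }

        // table[i] holds the i-th Fibonacci number, filled from the base cases up.
        long[] table = new long[n + 1];
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];
        }

        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10)); // prints 55
    }
}
```

Tabulation avoids the recursion depth of the memoized version and makes the O(n) time and space costs explicit; the space can even be reduced to O(1) by keeping only the last two values.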

## Implementing Binary Search Tree in Java

A binary search tree (BST) is a binary tree data structure where each node has a key that is greater than all the keys in its left subtree and less than all the keys in its right subtree. It allows for efficient insertion, deletion, and retrieval of elements.

Here is an example of implementing a binary search tree in Java:

```java
public class BinarySearchTree<T extends Comparable<T>> {
    private Node<T> root;

    public void insert(T key) {
        root = insertNode(root, key);
    }

    private Node<T> insertNode(Node<T> currentNode, T key) {
        if (currentNode == null) {
            return new Node<>(key);
        }

        if (key.compareTo(currentNode.getKey()) < 0) {
            currentNode.setLeft(insertNode(currentNode.getLeft(), key));
        } else {
            currentNode.setRight(insertNode(currentNode.getRight(), key));
        }

        return currentNode;
    }

    public void remove(T key) {
        root = removeNode(root, key);
    }

    private Node<T> removeNode(Node<T> currentNode, T key) {
        if (currentNode == null) {
            return null;
        }

        if (key.compareTo(currentNode.getKey()) < 0) {
            currentNode.setLeft(removeNode(currentNode.getLeft(), key));
        } else if (key.compareTo(currentNode.getKey()) > 0) {
            currentNode.setRight(removeNode(currentNode.getRight(), key));
        } else {
            if (currentNode.getLeft() == null && currentNode.getRight() == null) {
                return null;
            } else if (currentNode.getLeft() == null) {
                return currentNode.getRight();
            } else if (currentNode.getRight() == null) {
                return currentNode.getLeft();
            } else {
                T minValue = findMinValue(currentNode.getRight());
                currentNode.setKey(minValue);
                currentNode.setRight(removeNode(currentNode.getRight(), minValue));
            }
        }

        return currentNode;
    }

    private T findMinValue(Node<T> currentNode) {
        T minValue = currentNode.getKey();

        while (currentNode.getLeft() != null) {
            minValue = currentNode.getLeft().getKey();
            currentNode = currentNode.getLeft();
        }

        return minValue;
    }

    public boolean contains(T key) {
        return containsNode(root, key);
    }

    private boolean containsNode(Node<T> currentNode, T key) {
        if (currentNode == null) {
            return false;
        }

        if (key.compareTo(currentNode.getKey()) == 0) {
            return true;
        } else if (key.compareTo(currentNode.getKey()) < 0) {
            return containsNode(currentNode.getLeft(), key);
        } else {
            return containsNode(currentNode.getRight(), key);
        }
    }

    public void display() {
        displayInOrder(root);
    }

    private void displayInOrder(Node<T> currentNode) {
        if (currentNode == null) {
            return;
        }

        displayInOrder(currentNode.getLeft());
        System.out.print(currentNode.getKey() + " ");
        displayInOrder(currentNode.getRight());
    }

    private static class Node<T> {
        private T key;
        private Node<T> left;
        private Node<T> right;

        public Node(T key) {
            this.key = key;
        }

        public T getKey() {
            return key;
        }

        public void setKey(T key) {
            this.key = key;
        }

        public Node<T> getLeft() {
            return left;
        }

        public void setLeft(Node<T> left) {
            this.left = left;
        }

        public Node<T> getRight() {
            return right;
        }

        public void setRight(Node<T> right) {
            this.right = right;
        }
    }

    public static void main(String[] args) {
        BinarySearchTree<Integer> bst = new BinarySearchTree<>();

        bst.insert(5);
        bst.insert(3);
        bst.insert(8);

        bst.display(); // Output: 3 5 8

        bst.remove(3);

        bst.display(); // Output: 5 8

        System.out.println(bst.contains(5)); // Output: true
    }
}
```

In this example, we have implemented a binary search tree (BST) to store integers. The `BinarySearchTree` class provides methods for inserting elements into the tree, removing elements from the tree, checking if an element is present in the tree, and displaying the contents of the tree.

Each element in the BST is represented by a `Node` object, which contains the key and references to the left and right child nodes. The `insert` method inserts a key into the tree by traversing to the appropriate position based on the key’s value. The `remove` method removes a key from the tree by recursively finding the key and updating the references of the nodes. The `contains` method checks if a key is present in the tree by recursively searching for the key. The `display` method performs an in-order traversal of the tree, printing the keys in ascending order.

Binary search trees have a time complexity of O(log n) for insertion, removal, and search operations on average, where n is the number of elements in the tree. However, in the worst case, when the tree is unbalanced, the time complexity can degrade to O(n). Balancing techniques like AVL trees and red-black trees can be used to maintain a balanced tree and provide efficient operations in all scenarios.

Implementing a binary search tree allows for efficient insertion, deletion, and retrieval of elements in a sorted order, making it useful in various applications such as searching