Parallel-Concurrent-and-Distributed-Programming-in-Java

Parallel, Concurrent, and Distributed Programming in Java | Coursera Certification


LEGEND
✔️ - Topic covered during the course
✅ - Self-done assignment
☑ - Instructor assistance required


Parallel Programming in Java

Week 1 : Task Parallelism

✔️ Demonstrate task parallelism using Async/Finish constructs
✔️ Create task-parallel programs using Java's Fork/Join Framework (see the sketch below)
✔️ Interpret the Computation Graph abstraction for task-parallel programs
✔️ Evaluate the Multiprocessor Scheduling problem using Computation Graphs
✔️ Assess sequential bottlenecks using Amdahl's Law

✅ Mini project 1 : Reciprocal-Array-Sum using the Java Fork/Join Framework
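
A minimal Fork/Join sketch of the reciprocal-array-sum idea, assuming a plain `double[]` input; the class name, threshold, and test data are illustrative, not the assignment's actual skeleton.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ReciprocalArraySum extends RecursiveTask<Double> {
    private static final int THRESHOLD = 10_000;   // illustrative sequential cutoff
    private final double[] data;
    private final int lo, hi;

    ReciprocalArraySum(double[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Double compute() {
        if (hi - lo <= THRESHOLD) {                 // small enough: compute sequentially
            double sum = 0;
            for (int i = lo; i < hi; i++) sum += 1 / data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        ReciprocalArraySum left  = new ReciprocalArraySum(data, lo, mid);
        ReciprocalArraySum right = new ReciprocalArraySum(data, mid, hi);
        left.fork();                                // run the left half asynchronously
        double rightSum = right.compute();          // compute the right half in this task
        return left.join() + rightSum;              // wait for the left half and combine
    }

    public static void main(String[] args) {
        double[] data = new double[1_000_000];
        java.util.Arrays.fill(data, 2.0);
        double sum = ForkJoinPool.commonPool().invoke(
                new ReciprocalArraySum(data, 0, data.length));
        System.out.println("Reciprocal sum = " + sum);   // expect 500000.0
    }
}
```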

Week 2 : Functional Parallelism

✔️ Demonstrate functional parallelism using the Future construct
✔️ Create functional-parallel programs using Java's Fork/Join Framework
✔️ Apply the principle of memoization to optimize functional parallelism
✔️ Create functional-parallel programs using Java Streams (see the sketch below)
✔️ Explain the concepts of data races and functional/structural determinism

✅ Mini project 2 : Analyzing Student Statistics Using Java Parallel Streams
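
A minimal parallel-streams sketch of the student-statistics idea; the `Student` record and its fields are hypothetical stand-ins for the mini-project's class (records require Java 16+).

```java
import java.util.List;

public class StudentStats {
    // Hypothetical Student type; the actual mini-project class has more fields.
    record Student(String name, double grade, boolean active) {}

    public static void main(String[] args) {
        List<Student> students = List.of(
                new Student("A", 85.0, true),
                new Student("B", 72.5, false),
                new Student("C", 91.0, true));

        // Average grade of active students, computed over a parallel stream.
        double avg = students.parallelStream()
                .filter(Student::active)
                .mapToDouble(Student::grade)
                .average()
                .orElse(0.0);

        System.out.println("Average active grade = " + avg);
    }
}
```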

Week 3 : Loop Parallelism

✔️ Create programs with loop-level parallelism using the Forall and Java Stream constructs
✔️ Evaluate loop-level parallelism in a matrix-multiplication example (see the sketch below)
✔️ Examine the barrier construct for parallel loops
✔️ Evaluate parallel loops with barriers in an iterative-averaging example
✔️ Apply the concept of iteration grouping/chunking to improve the performance of parallel loops

✅ Mini project 3 : Parallelizing Matrix-Matrix Multiply Using Loop Parallelism
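
A loop-parallel matrix-multiply sketch that uses a parallel `IntStream` over the outer loop as a Java-streams stand-in for a forall construct; matrix sizes and test values are illustrative.

```java
import java.util.stream.IntStream;

public class MatrixMultiply {
    // C = A * B for n x n matrices, parallelizing the outer loop over rows.
    static double[][] multiply(double[][] a, double[][] b, int n) {
        double[][] c = new double[n][n];
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int k = 0; k < n; k++) sum += a[i][k] * b[k][j];
                c[i][j] = sum;
            }
        });
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        double[][] c = multiply(a, b, 2);
        System.out.println(c[0][0] + " " + c[0][1]);   // 19.0 22.0
        System.out.println(c[1][0] + " " + c[1][1]);   // 43.0 50.0
    }
}
```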

Week 4 : Data flow Synchronization and Pipelining

✔️ Create split-phase barriers using Java's Phaser construct (see the sketch below)
✔️ Create point-to-point synchronization patterns using Java's Phaser construct
✔️ Evaluate parallel loops with point-to-point synchronization in an iterative-averaging example
✔️ Analyze pipeline parallelism using the principles of point-to-point synchronization
✔️ Interpret data flow parallelism using the data-driven-task construct

☑ Mini project 4 : Using Phasers to Optimize Data-Parallel Applications
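
A minimal sketch of barrier-based iterative averaging with `java.util.concurrent.Phaser`, using the split-phase `arrive()` / `awaitAdvance()` pattern; the array size, iteration count, and thread count are illustrative, and the mini-project's fuzzy-barrier optimization is not reproduced.

```java
import java.util.concurrent.Phaser;

public class IterativeAveraging {
    public static void main(String[] args) throws InterruptedException {
        final int N = 8, ITERATIONS = 100, THREADS = 2;
        double[] a = new double[N + 2], b = new double[N + 2];
        a[N + 1] = 1.0; b[N + 1] = 1.0;               // fixed boundary values

        Phaser phaser = new Phaser(THREADS);           // one party per worker thread
        Thread[] workers = new Thread[THREADS];
        int chunk = N / THREADS;

        for (int t = 0; t < THREADS; t++) {
            final int lo = t * chunk + 1, hi = (t + 1) * chunk;
            workers[t] = new Thread(() -> {
                double[] cur = a, next = b;
                for (int iter = 0; iter < ITERATIONS; iter++) {
                    for (int i = lo; i <= hi; i++)
                        next[i] = (cur[i - 1] + cur[i + 1]) / 2.0;
                    int phase = phaser.arrive();       // split phase: signal "my updates are done"...
                    // (work independent of other threads could overlap here)
                    phaser.awaitAdvance(phase);        // ...then wait for every other thread
                    double[] tmp = cur; cur = next; next = tmp;
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(java.util.Arrays.toString(a));
    }
}
```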


Concurrent Programming in Java

Week 1 : Threads and Locks

✔️ Understand the role of Java threads in building concurrent programs
✔️ Create concurrent programs using Java threads and the synchronized statement (structured locks); see the sketch below
✔️ Create concurrent programs using Java threads and lock primitives in the java.util.concurrent library (unstructured locks)
✔️ Analyze programs with threads and locks to identify liveness and related concurrency bugs
✔️ Evaluate different approaches to solving the classical Dining Philosophers Problem

✅ Mini project 1 : Locking and Synchronization
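
A small sketch contrasting a structured lock (`synchronized`) with an unstructured lock (`ReentrantLock`) on shared counters; class and field names are illustrative.

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counters {
    static int structuredCount = 0;
    static int unstructuredCount = 0;
    static final Object monitor = new Object();
    static final ReentrantLock lock = new ReentrantLock();

    static void incrementStructured() {
        synchronized (monitor) {          // structured lock: acquire/release tied to the block
            structuredCount++;
        }
    }

    static void incrementUnstructured() {
        lock.lock();                      // unstructured lock: explicit lock()/unlock()
        try {
            unstructuredCount++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                incrementStructured();
                incrementUnstructured();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(structuredCount + " " + unstructuredCount);   // 200000 200000
    }
}
```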

Week 2 : Critical Sections and Isolation

✔️ Create concurrent programs with critical sections to coordinate accesses to shared resources
✔️ Create concurrent programs with object-based isolation to coordinate accesses to shared resources with more overlap than critical sections
✔️ Evaluate different approaches to implementing the Concurrent Spanning Tree algorithm
✔️ Create concurrent programs using Java's atomic variables (see the sketch below)
✔️ Evaluate the impact of read vs. write operations on concurrent accesses to shared resources

✅ Mini project 2 : Global and Object-Based Isolation
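
A minimal sketch of the atomic-variables item using `AtomicInteger`; it does not show the object-based isolation construct used in the mini project, only how an atomic read-modify-write avoids explicit locking.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger hits = new AtomicInteger(0);

        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                hits.incrementAndGet();   // lock-free atomic read-modify-write
            }
        };

        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(hits.get());   // always 200000, with no explicit locks
    }
}
```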

Week 3 : Actors

✔️ Understand the Actor model for building concurrent programs
✔️ Create simple concurrent programs using the Actor model (see the sketch below)
✔️ Analyze an Actor-based implementation of the Sieve of Eratosthenes program
✔️ Create Actor-based implementations of the Producer-Consumer pattern
✔️ Create Actor-based implementations of concurrent accesses on a bounded resource

✅ Mini project 3 : Sieve of Eratosthenes Using Actor Parallelism
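
The course builds actors with its own library; this hand-rolled sketch shows the core idea without that dependency: a private mailbox drained by a single thread, so actor state needs no locks. All class and method names here are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A tiny hand-rolled actor: a mailbox plus one thread that processes
// messages one at a time, so the actor's private state needs no locks.
abstract class SimpleActor<T> {
    private final BlockingQueue<T> mailbox = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(this::runLoop);

    void start() { worker.start(); }
    void send(T message) { mailbox.add(message); }
    void stop() { worker.interrupt(); }

    private void runLoop() {
        try {
            while (true) process(mailbox.take());   // handle one message at a time
        } catch (InterruptedException e) {
            // interrupted: actor shuts down
        }
    }

    abstract void process(T message);
}

public class ActorDemo {
    public static void main(String[] args) throws InterruptedException {
        SimpleActor<Integer> summer = new SimpleActor<>() {
            private int sum = 0;                     // actor-private state, no locks needed
            @Override void process(Integer msg) {
                sum += msg;
                System.out.println("received " + msg + ", running sum = " + sum);
            }
        };
        summer.start();
        for (int i = 1; i <= 5; i++) summer.send(i);
        Thread.sleep(500);                           // crude wait for the mailbox to drain
        summer.stop();
    }
}
```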

Week 4 : Concurrent Data Structures

✔️ Understand the principle of optimistic concurrency in concurrent algorithms
✔️ Understand implementation of concurrent queues based on optimistic concurrency
✔️ Understand linearizability as a correctness condition for concurrent data structures
✔️ Create concurrent Java programs that use the java.util.concurrent.ConcurrentHashMap library (see the sketch below)
✔️ Analyze a concurrent algorithm for computing a Minimum Spanning Tree of an undirected graph

☑ Mini project 4 : Parallelization of Boruvka's Minimum Spanning Tree Algorithm
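
A minimal `ConcurrentHashMap` sketch: `merge()` performs an atomic per-key update, so many threads can count words concurrently without a global lock. The word list is illustrative.

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentWordCount {
    public static void main(String[] args) {
        List<String> words = List.of("lock", "free", "lock", "map", "free", "lock");
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        // merge() is an atomic read-modify-write on a single key, so concurrent
        // updates to different keys proceed without blocking each other.
        words.parallelStream()
             .forEach(w -> counts.merge(w, 1, Integer::sum));

        System.out.println(counts);   // e.g. {free=2, lock=3, map=1}
    }
}
```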


Distributed Programming in Java

Week 1 : Distributed Map Reduce

✔️ Explain the MapReduce paradigm for analyzing data represented as key-value pairs
✔️ Apply the MapReduce paradigm to programs written using the Apache Hadoop framework
✔️ Create MapReduce programs using the Apache Spark framework (see the sketch below)
✔️ Understand the TF-IDF statistic used in data mining, and how it can be computed using the MapReduce paradigm
✔️ Create an implementation of the PageRank algorithm using the Apache Spark framework

☑ Mini project 1 : Page Rank with Spark
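
A word-count sketch of the MapReduce pattern in Spark's Java API, assuming a `spark-core` dependency on the classpath and a local master; `input.txt` is a placeholder path, and the mini-project's actual PageRank logic is not shown.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile("input.txt");   // placeholder input path

        // Map phase: emit (word, 1) for every word; Reduce phase: sum counts per word.
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);

        counts.collect().forEach(pair ->
                System.out.println(pair._1() + " : " + pair._2()));

        sc.stop();
    }
}
```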

Week 2 : Client-Server Programming

✔️ Generate distributed client-server applications using sockets (see the sketch below)
✔️ Demonstrate different approaches to serialization and deserialization of data structures for distributed programming
✔️ Recall the use of remote method invocations as a higher-level primitive for distributed programming (compared to sockets)
✔️ Evaluate the use of multicast sockets as a generalization of sockets
✔️ Employ distributed publish-subscribe applications using the Apache Kafka framework

✅ Mini project 2 : File Server
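
A single-threaded socket file-server sketch: the client sends a file name on one line, and the server replies with that file's contents. The port number and line-based protocol are illustrative, not the mini-project's actual specification.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Scanner;

public class SimpleFileServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {     // illustrative port
            while (true) {
                try (Socket client = server.accept();
                     Scanner in = new Scanner(client.getInputStream());
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String requested = in.nextLine().trim();     // file name sent by the client
                    Path path = Path.of(requested);
                    if (Files.isReadable(path)) {
                        out.println(Files.readString(path));     // reply with the file contents
                    } else {
                        out.println("ERROR: cannot read " + requested);
                    }
                }
            }
        }
    }
}
```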

Week 3 : Message Passing

✔️ Create distributed applications using the Single Program Multiple Data (SPMD) model
✔️ Create message-passing programs using point-to-point communication primitives in MPI (see the sketch below)
✔️ Identify message ordering and deadlock properties of MPI programs
✔️ Evaluate the advantages of non-blocking communication relative to standard blocking communication primitives
✔️ Explain collective communication as a generalization of point-to-point communication

☑ Mini project 3 : Matrix Multiply in MPI
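
A point-to-point message-passing sketch, assuming an mpiJava-style Java binding (e.g. MPJ Express) on the classpath and launched with that toolchain's runner; the buffer contents and tag are illustrative.

```java
import mpi.MPI;

public class MpiPingPong {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int[] buffer = new int[1];

        if (rank == 0) {
            buffer[0] = 42;
            // Blocking send of one int from rank 0 to rank 1 with tag 0.
            MPI.COMM_WORLD.Send(buffer, 0, 1, MPI.INT, 1, 0);
        } else if (rank == 1) {
            // Blocking receive of one int from rank 0 with tag 0.
            MPI.COMM_WORLD.Recv(buffer, 0, 1, MPI.INT, 0, 0);
            System.out.println("Rank 1 received " + buffer[0]);
        }

        MPI.Finalize();
    }
}
```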

Week 4 : Combining Distribution and Multithreading

✔️ Distinguish processes and threads as basic building blocks of parallel, concurrent, and distributed Java programs
✔️ Create multithreaded servers in Java using threads and processes (see the sketch below)
✔️ Demonstrate how multithreading can be combined with message-passing programming models like MPI
✔️ Analyze how the actor model can be used for distributed programming
✔️ Assess how the reactive programming model can be used for distributed programming

✅ Mini project 4 : Multi-Threaded File Server
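
A multithreaded variant of the file-server sketch above: connections are accepted on the main thread and handed to a fixed thread pool, so one slow client does not block the rest. The port, pool size, and protocol are again illustrative.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MultiThreadedFileServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(8);   // illustrative pool size
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();          // accept on the main thread
                pool.submit(() -> handle(client));        // serve the request on a pool thread
            }
        }
    }

    static void handle(Socket client) {
        try (client;
             Scanner in = new Scanner(client.getInputStream());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            Path path = Path.of(in.nextLine().trim());
            out.println(Files.isReadable(path) ? Files.readString(path)
                                               : "ERROR: cannot read " + path);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```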