Problem Solving Through Programming in C

In this lesson, we are going to learn about problem solving through programming in C. This is the first lesson in our study of the C language.


Introduction to Problem Solving Through Programming in C

Regardless of the area of study, computer science is ultimately about solving problems with computers. The problem we want to solve can come from the real world or even from an abstract one. Either way, we need a systematic approach to problem solving through programming in C.

Computer programmers are problem solvers. To solve a problem on a computer, we must know how to represent the information that describes the problem and determine the steps needed to transform that information from one representation into another.

A computer is a very powerful and versatile machine capable of performing a multitude of different tasks, yet it has no intelligence or thinking power.

A computer cannot solve a problem on its own; one has to provide a step-by-step solution to it. The task of problem solving, in fact, is not the computer's.

It is the programmer who has to write down the solution to the problem in terms of simple operations which the computer can understand and execute.

In order to solve a problem with the computer, one has to pass through certain stages or steps. They are as follows:

Steps to Solve a Problem With the Computer

Step 1: Understanding the Problem:

Here we try to understand the problem to be solved in its totality. Before moving on to the next step, we should be absolutely sure about the objectives of the given problem.

Step 2: Analyzing the Problem:

The idea here is to search for an appropriate solution to the problem under consideration. The end result of this stage is a broad overview of the sequence of operations that are to be carried out to solve the given problem.

Step 3: Developing the Solution:

Here, the overview of the sequence of operations that was the result of the analysis stage is expanded to form a detailed step-by-step solution to the problem under consideration.

Step 4: Coding and Implementation:

The vehicle for the computer solution to a problem is a set of explicit and unambiguous instructions expressed in a programming language. This set of instructions is called a program.

A program may also be thought of as an algorithm expressed in a programming language. An algorithm, therefore, corresponds to a solution to a problem that is independent of any programming language.

Problem solving is a skill, and there is no universal approach one can take to it. Basically, one must explore possible avenues to a solution one by one until one comes across the right path.

In general, as one gains experience in solving problems, one develops one’s own techniques and strategies, though they are often intangible. Problem-solving skills are recognized as an integral component of computer programming.

Problem Solving Steps

Problem solving is a creative process that largely defies systematization and mechanization. Even so, there are a number of steps one can take to raise the level of one's performance in problem solving.

A problem-solving technique follows certain steps in finding the solution to a problem. Let us look into the steps one by one:

1. Problem Definition Phase:

In the problem definition phase, we must emphasize what must be done rather than how it is to be done. That is, we try to extract a precisely defined set of tasks from the problem statement.

Inexperienced problem solvers too often gallop ahead with the task of problem solving, only to find that they are either solving the wrong problem or solving just one narrow case of it.

2. Getting Started on a Problem:

Sometimes you do not have any idea where to begin solving a problem, even if the problem has been defined. Such a block sometimes occurs because you are overly concerned with the details of the implementation even before you have completely understood or worked out a solution.

The best advice is not to get concerned with the details. Those can come later when the intricacies of the problem have been understood.

3. Use of Specific Examples:

It is usually much easier to work out the details of a solution to a specific problem because the relationship between the mechanism and the problem is more clearly defined.

This approach of focusing on a particular problem can give us the foothold we need for making a start on the solution to the general problem.

4. Similarities Among Problems:

The more experience one has, the more tools and techniques one can bring to bear in tackling a given problem. Sometimes, however, past experience blocks us from discovering a more desirable or better solution to the problem.

A skill that is important to try to develop in problem-solving is the ability to view a problem from a variety of angles.

5. Working Backwards from the Solution:

In some cases, we can assume that we already have the solution to the problem and then try to work backwards to the starting point. Even a guess at the solution to the problem may be enough to give us a foothold to start on the problem.

We can systematize the investigations and avoid duplicate efforts by writing down the various steps taken and explorations made.

General Problem Solving Strategies:

There are a number of general and powerful computational strategies that are repeatedly used in various guises in computer science.

Often it is possible to phrase a problem in terms of one of these strategies and achieve considerable gains in computational efficiency.

1. Divide and Conquer:

The basic idea is to split the original problem into smaller sub-problems and solve those. The splitting can be carried on further, so that eventually we have many sub-problems so small that no further splitting is necessary to solve them. We shall see many examples of this strategy and discuss the gains in efficiency due to its application.
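As a concrete illustration (a minimal sketch added here, not part of the original text, with the function name max_in_range chosen for the example), the following C function finds the largest value in an array by splitting the range in half, solving each half, and combining the two answers:

    #include <stdio.h>

    /* Find the largest value in a[lo..hi] by splitting the range in half,
       solving each half, and combining the two answers. */
    int max_in_range(const int a[], int lo, int hi)
    {
        if (lo == hi)                    /* sub-problem of size 1: trivial */
            return a[lo];
        int mid = (lo + hi) / 2;
        int left  = max_in_range(a, lo, mid);        /* solve left half  */
        int right = max_in_range(a, mid + 1, hi);    /* solve right half */
        return (left > right) ? left : right;        /* combine          */
    }

    int main(void)
    {
        int a[] = {4, 9, 1, 7, 3};
        printf("Largest = %d\n", max_in_range(a, 0, 4));  /* prints 9 */
        return 0;
    }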

2. Binary Doubling:

This is the reverse of the divide-and-conquer strategy, i.e., we build up the solution to a larger problem from the solutions to smaller sub-problems, typically doubling the size of the problem handled at each step.
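Exponentiation by repeated squaring is a classic instance of this build-up idea. The following C sketch is an illustration added here (the function name power is ours); it computes x^n by doubling the exponent handled at each step:

    #include <stdio.h>

    /* Compute x to the power n by repeated squaring: the exponent handled
       doubles at every step, so only about log2(n) multiplications are needed. */
    long long power(long long x, unsigned int n)
    {
        long long result = 1;
        while (n > 0) {
            if (n & 1)        /* the current binary digit of n is 1 */
                result *= x;
            x *= x;           /* answer for an exponent twice as large */
            n >>= 1;
        }
        return result;
    }

    int main(void)
    {
        printf("3^10 = %lld\n", power(3, 10));   /* prints 59049 */
        return 0;
    }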

3. Dynamic Programming:

The travelling salesman problem falls into this category. The idea here is that a good or optimal solution to a problem can be built up from good or optimal solutions to its sub-problems.
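As a small illustration (a sketch added here, not the text's own example), the n-th Fibonacci number can be built bottom-up from the solutions to the two sub-problems that precede it:

    #include <stdio.h>

    /* Build the n-th Fibonacci number bottom-up from the solutions to the
       two smaller sub-problems that precede it. */
    long long fib(int n)
    {
        long long prev = 0, curr = 1;          /* fib(0) and fib(1) */
        if (n == 0)
            return 0;
        for (int i = 2; i <= n; i++) {
            long long next = prev + curr;      /* combine stored sub-solutions */
            prev = curr;
            curr = next;
        }
        return curr;
    }

    int main(void)
    {
        printf("fib(10) = %lld\n", fib(10));   /* prints 55 */
        return 0;
    }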

4. General Search, Backtracking and Branch-and-Bound:

All of these are variants of the basic dynamic programming strategy but are equally important.


Algorithm in C Language

In this tutorial, we will learn about algorithms in the C language, with examples and practice. Let us start by understanding what an algorithm is.

What is an Algorithm?

An algorithm is a set of well-defined instructions to solve a particular problem. It is a step-by-step procedure that executes instructions in a certain order to get the required output. Software engineers commonly use algorithms for planning and problem solving.

The following are the characteristics of an algorithm:


  • Definiteness: Each instruction must be clear and unambiguous.
  • Input: An algorithm may have zero or more well-defined inputs.
  • Output: An algorithm is expected to produce one or more desired outputs.
  • Finiteness: The algorithm must terminate after a finite number of steps.
  • Language Independent: The steps of an algorithm should be independent of any programming language.
  • Feasible: The algorithm should be simple and generic so that it is feasible with any available resources.

How to write an algorithm in C (or any other language)

As mentioned earlier, an algorithm is independent of any programming language, so there is no single fixed way to write one. It depends on your problem and resources.

However, it helps to follow a basic outline when writing an algorithm. Follow the steps below.

  • Define your algorithm’s input.
  • Define the variables.
  • Outline the algorithm’s operations.
  • Output the results of your algorithm’s operations.

There are some common control structures used while writing an algorithm: sequence (execution proceeding from top to bottom), branching (if-else statements), and loops (do-while, for, while). A short C sketch of all three appears below.
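The following sketch is added here for illustration; it shows sequence, a loop, and a branch in one small C program:

    #include <stdio.h>

    int main(void)
    {
        int n = 5;                      /* sequence: statements run top to bottom */
        int sum = 0;

        for (int i = 1; i <= n; i++)    /* loop: repeat the body n times */
            sum += i;

        if (sum % 2 == 0)               /* branching: choose one of two paths */
            printf("Sum %d is even\n", sum);
        else
            printf("Sum %d is odd\n", sum);

        return 0;
    }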

Advantages of Algorithm

  • It is easy to understand because of its step-wise representation.
  • Because it is independent of any programming language, even a person without programming knowledge can understand it.
  • The step-by-step procedure breaks the problem down into smaller parts, making it easier for the programmer to understand and code.

Examples of algorithms

Problem 1: Write an algorithm to add two numbers and display the result.

Step 1: START
Step 2: Declare integer variables num1, num2, and result.
Step 3: Read values into num1 and num2.
Step 4: Add the values and store the sum: result ← num1 + num2
Step 5: Print result
Step 6: STOP
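A direct C translation of these steps might look like the following (an illustrative sketch; the variable names follow the algorithm above):

    #include <stdio.h>

    int main(void)
    {
        int num1, num2, result;                  /* Step 2: declare variables     */
        printf("Enter two integers: ");
        scanf("%d %d", &num1, &num2);            /* Step 3: read num1 and num2    */
        result = num1 + num2;                    /* Step 4: result <- num1 + num2 */
        printf("Result = %d\n", result);         /* Step 5: print result          */
        return 0;                                /* Step 6: stop                  */
    }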

Problem 2: Write an algorithm to calculate the factorial of a number and print the result.

Step 1: START
Step 2: Declare num, fact, and i.
Step 3: Initialize fact = 1 and i = 1
Step 4: Read num
Step 5: If i <= num, go to Step 6; else go to Step 8
Step 6: Calculate fact = fact * i
Step 7: i++ and go to Step 5
Step 8: Print fact
Step 9: STOP
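Assuming the same variable names as the steps above, a C translation could look like this (an illustrative sketch):

    #include <stdio.h>

    int main(void)
    {
        int num, i = 1;                          /* Step 2 and Step 3             */
        unsigned long long fact = 1;
        printf("Enter a non-negative integer: ");
        scanf("%d", &num);                       /* Step 4: read num              */
        while (i <= num) {                       /* Step 5: repeat while i <= num */
            fact = fact * i;                     /* Step 6                        */
            i++;                                 /* Step 7                        */
        }
        printf("Factorial = %llu\n", fact);      /* Step 8: print fact            */
        return 0;                                /* Step 9: stop                  */
    }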

Analysis of Algorithm

There are two ways to analyze an algorithm: before implementation and after implementation. They are:

1. Priori Analysis: Priori means "before", indicating that this analysis is done before the implementation of an algorithm. It is a theoretical analysis: the efficiency of the algorithm is measured by assuming that all other factors (for example, processor speed) are constant and have no effect on the implementation. The algorithm designer is responsible for this analysis.

2. Posterior Analysis: Posterior means "after", indicating that this analysis is done after the implementation of an algorithm. The algorithm is checked by implementing it in a programming language, and the programmer measures statistics such as running time, correctness, and space required.

Complexity of an algorithm

Time and space are the two main factors that decide the complexity of an algorithm.

Time factor: The time factor of an algorithm is measured by counting the number of key operations, such as comparisons in a sorting algorithm.

Space factor: The space factor of an algorithm is measured by the maximum memory space required by the algorithm.

Space Complexity

Space complexity refers to the amount of memory that an algorithm uses during its execution to produce the result. The calculation of space complexity is based on the following two components.

  • Fixed Part:  The space required to store input variables, output variables, the program itself, etc., whose sizes are independent of the problem.
  • Variable Part:  The space required to store variables whose size depends on the size of the problem, for example temporary variables, dynamically allocated memory, and recursion stack space.

Time Complexity

Time complexity refers to the amount of time that an algorithm requires to complete its task, counting its operations, conditional statements, and so on. The calculation of time complexity is based on the following two components (a short C sketch follows the list).

  • Constant time part:  Instructions that are executed only once, such as input, output, if-else, and switch statements.
  • Variable Time Part:  Instructions that may be executed more than once, such as loops and recursive calls.
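For example (an illustrative sketch added here), in the following program the statements outside the loop form the constant time part, while the loop body forms the variable time part and runs n times, giving O(n) running time:

    #include <stdio.h>

    int main(void)
    {
        int n = 1000;                   /* constant time part: runs once    */
        long sum = 0;

        for (int i = 0; i < n; i++)     /* variable time part: runs n times */
            sum += i;

        printf("sum = %ld\n", sum);     /* constant time part: runs once    */
        return 0;
    }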


An algorithm is a step-by-step procedure or set of instructions for solving a specific problem or performing a particular task. It is a fundamental concept in computer science and programming and serves as a blueprint for designing and implementing solutions to various problems. Algorithms are used to solve a wide range of computational problems, from basic arithmetic calculations to complex data processing and analysis tasks.

Here's a more detailed explanation of algorithms in the context of programming using the C language:

An algorithm is a precise and systematic set of well-defined steps or instructions that, when executed in a specific order, accomplish a particular task or solve a problem. It is a high-level description of how to achieve a specific goal in a manner that is understandable by both humans and computers.

Well-Defined Steps: Each step in an algorithm must be unambiguous and clearly defined, leaving no room for interpretation.

Finite: Algorithms must have a finite number of steps or operations, meaning they must eventually terminate.

Input and Output: Algorithms take some input, process it according to the defined steps, and produce an output.

Deterministic: The behavior of an algorithm is deterministic, meaning the same input will always produce the same output when the algorithm is executed.

Effective: Algorithms are designed to be practical and efficient in solving problems.

Input: The data or information that the algorithm needs to process in order to produce the desired output.

Output: The result or solution generated by the algorithm after processing the input.

Control Structures: The flow of execution is controlled using conditional statements (if, else) and loops (for, while) to make decisions and repeat steps as needed.

Operations: A sequence of well-defined operations, calculations, or actions that transform the input data into the desired output.

Termination: The algorithm must eventually reach an end or termination point after completing the required steps.

Let's consider an example algorithm for calculating the factorial of a non-negative integer n using the iterative approach:

Algorithm: Iterative Factorial Calculation
Input: A non-negative integer n
Output: The factorial of n

    Initialize result to 1
    For i from 1 to n:
        Multiply result by i
    Output result

A C implementation of this algorithm:

    #include <stdio.h>

    int main() {
        int n, i;
        unsigned long long factorial = 1; // To handle larger values
        printf("Enter a non-negative integer: ");
        scanf("%d", &n);
        for (i = 1; i <= n; ++i) {
            factorial *= i;
        }
        printf("Factorial of %d is %llu\n", n, factorial);
        return 0;
    }
This C code demonstrates an iterative algorithm for calculating the factorial of a non-negative integer. It initializes a result variable to 1 and uses a loop to multiply the result by each integer from 1 to n. The final result is then displayed.

In summary, an algorithm is a set of clear, finite, and ordered steps that guide the solution of a specific problem. It is essential for both human understanding and computer implementation, forming the foundation of programming and problem-solving in the world of computer science.

Problem Solving and Algorithms

Learn a basic process for developing a solution to a problem. Nothing in this chapter is unique to using a computer to solve a problem. This process can be used to solve a wide variety of problems, including ones that have nothing to do with computers.

Problems, Solutions, and Tools

I have a problem! I need to thank Aunt Kay for the birthday present she sent me. I could send a thank you note through the mail. I could call her on the telephone. I could send her an email message. I could drive to her house and thank her in person. In fact, there are many ways I could thank her, but that's not the point. The point is that I must decide how I want to solve the problem, and use the appropriate tool to implement (carry out) my plan. The postal service, the telephone, the internet, and my automobile are tools that I can use, but none of these actually solves my problem. In a similar way, a computer does not solve problems, it's just a tool that I can use to implement my plan for solving the problem.

Knowing that Aunt Kay appreciates creative and unusual things, I have decided to hire a singing messenger to deliver my thanks. In this context, the messenger is a tool, but one that needs instructions from me. I have to tell the messenger where Aunt Kay lives, what time I would like the message to be delivered, and what lyrics I want sung. A computer program is similar to my instructions to the messenger.

The story of Aunt Kay uses a familiar context to set the stage for a useful point of view concerning computers and computer programs. The following list summarizes the key aspects of this point of view.

A computer is a tool that can be used to implement a plan for solving a problem.

A computer program is a set of instructions for a computer. These instructions describe the steps that the computer must follow to implement a plan.

An algorithm is a plan for solving a problem.

A person must design an algorithm.

A person must translate an algorithm into a computer program.

This point of view sets the stage for a process that we will use to develop solutions to Jeroo problems. The basic process is important because it can be used to solve a wide variety of problems, including ones where the solution will be written in some other programming language.

An Algorithm Development Process

Every problem solution starts with a plan. That plan is called an algorithm.

There are many ways to write an algorithm. Some are very informal, some are quite formal and mathematical in nature, and some are quite graphical. The instructions for connecting a DVD player to a television are an algorithm. A mathematical formula such as πR² is a special case of an algorithm. The form is not particularly important as long as it provides a good way to describe and check the logic of the plan.

The development of an algorithm (a plan) is a key step in solving a problem. Once we have an algorithm, we can translate it into a computer program in some programming language. Our algorithm development process consists of five major steps.

Step 1: Obtain a description of the problem.

Step 2: Analyze the problem.

Step 3: Develop a high-level algorithm.

Step 4: Refine the algorithm by adding more detail.

Step 5: Review the algorithm.

The first step, obtaining a description of the problem, is much more difficult than it appears. In the following discussion, the word client refers to someone who wants to find a solution to a problem, and the word developer refers to someone who finds a way to solve the problem. The developer must create an algorithm that will solve the client's problem.

The client is responsible for creating a description of the problem, but this is often the weakest part of the process. It's quite common for a problem description to suffer from one or more of the following types of defects: (1) the description relies on unstated assumptions, (2) the description is ambiguous, (3) the description is incomplete, or (4) the description has internal contradictions. These defects are seldom due to carelessness by the client. Instead, they are due to the fact that natural languages (English, French, Korean, etc.) are rather imprecise. Part of the developer's responsibility is to identify defects in the description of a problem, and to work with the client to remedy those defects.

The purpose of the second step, analyzing the problem, is to determine both the starting and ending points for solving the problem. This process is analogous to a mathematician determining what is given and what must be proven. A good problem description makes it easier to perform this step.

When determining the starting point, we should start by seeking answers to the following questions:

What data are available?

Where is that data?

What formulas pertain to the problem?

What rules exist for working with the data?

What relationships exist among the data values?

When determining the ending point, we need to describe the characteristics of a solution. In other words, how will we know when we're done? Asking the following questions often helps to determine the ending point.

What new facts will we have?

What items will have changed?

What changes will have been made to those items?

What things will no longer exist?

An algorithm is a plan for solving a problem, but plans come in several levels of detail. It's usually better to start with a high-level algorithm that includes the major part of a solution, but leaves the details until later. We can use an everyday example to demonstrate a high-level algorithm.

Problem: I need to send a birthday card to my brother, Mark.

Analysis: I don't have a card. I prefer to buy a card rather than make one myself.

High-level algorithm:

Go to a store that sells greeting cards
Select a card
Purchase a card
Mail the card

This algorithm is satisfactory for daily use, but it lacks details that would have to be added were a computer to carry out the solution. These details include answers to questions such as the following.

"Which store will I visit?"

"How will I get there: walk, drive, ride my bicycle, take the bus?"

"What kind of card does Mark like: humorous, sentimental, risqué?"

These kinds of details are considered in the next step of our process.

A high-level algorithm shows the major steps that need to be followed to solve a problem. Now we need to add details to these steps, but how much detail should we add? Unfortunately, the answer to this question depends on the situation. We have to consider who (or what) is going to implement the algorithm and how much that person (or thing) already knows how to do. If someone is going to purchase Mark's birthday card on my behalf, my instructions have to be adapted to whether or not that person is familiar with the stores in the community and how well the purchaser knows my brother's taste in greeting cards.

When our goal is to develop algorithms that will lead to computer programs, we need to consider the capabilities of the computer and provide enough detail so that someone else could use our algorithm to write a computer program that follows the steps in our algorithm. As with the birthday card problem, we need to adjust the level of detail to match the ability of the programmer. When in doubt, or when you are learning, it is better to have too much detail than to have too little.

Most of our examples will move from a high-level to a detailed algorithm in a single step, but this is not always reasonable. For larger, more complex problems, it is common to go through this process several times, developing intermediate level algorithms as we go. Each time, we add more detail to the previous algorithm, stopping when we see no benefit to further refinement. This technique of gradually working from a high-level to a detailed algorithm is often called stepwise refinement .

The final step is to review the algorithm. What are we looking for? First, we need to work through the algorithm step by step to determine whether or not it will solve the original problem. Once we are satisfied that the algorithm does provide a solution to the problem, we start to look for other things. The following questions are typical of ones that should be asked whenever we review an algorithm. Asking these questions and seeking their answers is a good way to develop skills that can be applied to the next problem.

Does this algorithm solve a very specific problem or does it solve a more general problem ? If it solves a very specific problem, should it be generalized?

For example, an algorithm that computes the area of a circle having radius 5.2 meters (formula π × 5.2²) solves a very specific problem, but an algorithm that computes the area of any circle (formula π × R²) solves a more general problem.

Can this algorithm be simplified ?

One formula for computing the perimeter of a rectangle is:

length + width + length + width

A simpler formula would be:

2.0 * ( length + width )

Is this solution similar to the solution to another problem? How are they alike? How are they different?

For example, consider the following two formulae:

Rectangle area = length * width
Triangle area = 0.5 * base * height

Similarities: Each computes an area. Each multiplies two measurements.

Differences: Different measurements are used. The triangle formula contains 0.5.

Hypothesis: Perhaps every area formula involves multiplying two measurements.

Example 4.1: Pick and Plant

This section contains an extended example that demonstrates the algorithm development process. To complete the algorithm, we need to know that every Jeroo can hop forward, turn left and right, pick a flower from its current location, and plant a flower at its current location.

Problem Statement (Step 1)

A Jeroo starts at (0, 0) facing East with no flowers in its pouch. There is a flower at location (3, 0). Write a program that directs the Jeroo to pick the flower and plant it at location (3, 2). After planting the flower, the Jeroo should hop one space East and stop. There are no other nets, flowers, or Jeroos on the island.

[Figures: the island at the Start and at the Finish]

Analysis of the Problem (Step 2)

The flower is exactly three spaces ahead of the Jeroo.

The flower is to be planted exactly two spaces South of its current location.

The Jeroo is to finish facing East one space East of the planted flower.

There are no nets to worry about.

High-level Algorithm (Step 3)

Let's name the Jeroo Bobby. Bobby should do the following:

Get the flower
Put the flower
Hop East

Detailed Algorithm (Step 4)

Get the flower
    Hop 3 times
    Pick the flower
Put the flower
    Turn right
    Hop 2 times
    Plant a flower
Hop East
    Turn left
    Hop once

Review the Algorithm (Step 5)

The high-level algorithm partitioned the problem into three rather easy subproblems. This seems like a good technique.

This algorithm solves a very specific problem because the Jeroo and the flower are in very specific locations.

This algorithm is actually a solution to a slightly more general problem in which the Jeroo starts anywhere, and the flower is 3 spaces directly ahead of the Jeroo.

Java Code for "Pick and Plant"

A good programmer doesn't write a program all at once. Instead, the programmer will write and test the program in a series of builds. Each build adds to the previous one. The high-level algorithm will guide us in this process.

FIRST BUILD

To see this solution in action, create a new Greenfoot4Sofia scenario and use the Edit Palettes Jeroo menu command to make the Jeroo classes visible. Right-click on the Island class and create a new subclass with the name of your choice. This subclass will hold your new code.

The recommended first build contains three things:

The main method (here myProgram() in your island subclass).

Declaration and instantiation of every Jeroo that will be used.

The high-level algorithm in the form of comments.

The instantiation at the beginning of myProgram() places bobby at (0, 0), facing East, with no flowers.

Once the first build is working correctly, we can proceed to the others. In this case, each build will correspond to one step in the high-level algorithm. It may seem like a lot of work to use four builds for such a simple program, but doing so helps establish habits that will become invaluable as the programs become more complex.

SECOND BUILD

This build adds the logic to "get the flower", which in the detailed algorithm (step 4 above) consists of hopping 3 times and then picking the flower. The new code is indicated by comments that wouldn't appear in the original (they are just here to call attention to the additions). The blank lines help show the organization of the logic.

By taking a moment to run the work so far, you can confirm whether or not this step in the planned algorithm works as expected.

THIRD BUILD

This build adds the logic to "put the flower". New code is indicated by the comments that are provided here to mark the additions.

FOURTH BUILD (final)

Example 4.2: Replace Net with Flower

This section contains a second example that demonstrates the algorithm development process.

There are two Jeroos. One Jeroo starts at (0, 0) facing North with one flower in its pouch. The second starts at (0, 2) facing East with one flower in its pouch. There is a net at location (3, 2). Write a program that directs the first Jeroo to give its flower to the second one. After receiving the flower, the second Jeroo must disable the net, and plant a flower in its place. After planting the flower, the Jeroo must turn and face South. There are no other nets, flowers, or Jeroos on the island.

Jeroo_2 is exactly two spaces behind Jeroo_1.

The only net is exactly three spaces ahead of Jeroo_2.

Each Jeroo has exactly one flower.

Jeroo_2 will have two flowers after receiving one from Jeroo_1. One flower must be used to disable the net. The other flower must be planted at the location of the net, i.e. (3, 2).

Jeroo_1 will finish at (0, 1) facing South.

Jeroo_2 is to finish at (3, 2) facing South.

Each Jeroo will finish with 0 flowers in its pouch. One flower was used to disable the net, and the other was planted.

Let's name the first Jeroo Ann and the second one Andy.

High-level algorithm:

Ann should do the following:
    Find Andy (but don't collide with him)
    Give a flower to Andy (he will be straight ahead)
After receiving the flower, Andy should do the following:
    Find the net (but don't hop onto it)
    Disable the net
    Plant a flower at the location of the net
    Face South

Detailed algorithm:

Ann should do the following:
    Find Andy
        Turn around (either left or right twice)
        Hop (to location (0, 1))
    Give a flower to Andy
        Give ahead
Now Andy should do the following:
    Find the net
        Hop twice (to location (2, 2))
    Disable the net
        Toss
    Plant a flower at the location of the net
        Hop (to location (3, 2))
        Plant a flower
    Face South
        Turn right

The high-level algorithm helps manage the details.

This algorithm solves a very specific problem, but the specific locations are not important. The only thing that is important is the starting location of the Jeroos relative to one another and the location of the net relative to the second Jeroo's location and direction.

Java Code for "Replace Net with Flower"

As before, the code should be written incrementally as a series of builds. Four builds will be suitable for this problem. As usual, the first build will contain the main method, the declaration and instantiation of the Jeroo objects, and the high-level algorithm in the form of comments. The second build will have Ann give her flower to Andy. The third build will have Andy locate and disable the net. In the final build, Andy will place the flower and turn to face South.

This build creates the main method, instantiates the Jeroos, and outlines the high-level algorithm. In this example, the main method would be myProgram() contained within a subclass of Island .

This build adds the logic for Ann to locate Andy and give him a flower.

This build adds the logic for Andy to locate and disable the net.

This build adds the logic for Andy to place a flower at (3, 2) and turn South.

What is an Algorithm?

In computer programming terms, an algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of input(s) and produces the desired output. For example,

An algorithm to add two numbers:

Take two number inputs

Add numbers using the + operator

Display the result

Qualities of a Good Algorithm

  • Input and output should be defined precisely.
  • Each step in the algorithm should be clear and unambiguous.
  • An algorithm should be the most effective among the many different ways to solve a problem.
  • An algorithm shouldn't include computer code. Instead, the algorithm should be written in such a way that it can be used in different programming languages.

Algorithm Examples

Algorithm to add two numbers

Algorithm to find the largest among three numbers

Algorithm to find all the roots of the quadratic equation

Algorithm to find the factorial

Algorithm to check prime number

Algorithm of Fibonacci series


C Algorithms

This tutorial series shows you how to implement the most common algorithms in C, including sorting and searching.

Section 1. Sorting

  • Bubble Sort – show you how to implement a bubble sort in C.
  • Selection Sort – learn about how the selection sort works.
  • Heapsort – explain to you the heap sort algorithm.
  • Quicksort – guide you on the quick sort algorithm.

Section 2. Searching

  • Binary Search – learn about the binary search and how to implement it in C.


Problem Solving Through Programming In C

Learning outcomes of the course:

  • Formulate simple algorithms for arithmetic and logical problems
  • Translate the algorithms to programs (in C language)
  • Test and execute the programs and  correct syntax and logical errors
  • Implement conditional branching, iteration and recursion
  • Decompose a problem into functions and synthesize a complete program using divide and conquer approach
  • Use arrays, pointers and structures to formulate algorithms and programs
  • Apply programming to solve matrix addition and multiplication problems and searching and sorting problems 
  • Apply programming to solve simple numerical method problems, namely root finding of function, differentiation of function and simple integration




An algorithm is a sequence of instructions that are carried out in a predetermined order to solve a problem or complete a task. A function, by contrast, is a block of code that can be called and executed from other parts of the program.

In other words, an algorithm is a set of instructions for resolving an issue or carrying out a certain activity. In computer science, algorithms are used for a wide range of operations, from fundamental math to intricate data processing.

One of the common algorithms used in C is the sorting algorithm. A sorting algorithm arranges a collection of items in a certain order, such as numerically or alphabetically.

There are many sorting algorithms, each with advantages and disadvantages. Among the most common sorting algorithms implemented in C are quicksort and merge sort.

One of the key features of C is pointer support, which allows efficient manipulation of data structures such as arrays and queues. This makes C suitable for implementing algorithms that require complex data manipulation, such as sorting and searching.

A famous example of a library that provides a wide variety of algorithms for sorting, searching, and manipulating data structures is the C++ Standard Template Library (STL); in C itself, the standard library offers generic routines such as qsort and bsearch for the same purposes.

The definition of an algorithm includes several important features:

  • Input: Algorithms must receive inputs that can be represented as values or data.
  • Output: The algorithm should produce some output. It can be a consequence of the problem or a solution designed to solve it.
  • Definiteness: Algorithms must be precisely defined, using unambiguous instructions that a computer or other system can follow.
  • Finiteness: The algorithm requires a limited number of steps; it should terminate after executing a certain number of instructions.
  • Effectiveness: The algorithm must be valid, meaning it must be able to produce a solution to the problem it is designed to solve in a reasonable amount of time.
  • Generality: The algorithm must be general, meaning it can be applied to a wide range of problems rather than being specific to a single problem.

Algorithm analysis is the process of evaluating an algorithm's performance in terms of efficiency, complexity, and other criteria. Typically, this is done to compare several algorithms and select the best one for a given problem or piece of software.

Analysis of algorithms usually involves measuring their time and space complexity.

Space complexity describes the amount of memory or disk space an algorithm needs, while time complexity describes how long it takes to perform a task.

There are different notations for analyzing the time complexity of algorithms, such as Big O and Omega notation. Big O gives an upper bound on the algorithm's time complexity, while Omega gives a lower bound.

In addition to measuring time and space complexity, algorithm analysis also includes other criteria such as stability, parallelism, and scalability.

Stability: the ability of the algorithm to maintain the relative order of equal elements in the data set.

Parallelism: the capacity to execute operations in parallel across several processors.

Scalability: the ability of an algorithm to handle large volumes of data and growing inputs.

A priori analysis is a method of algorithm analysis that focuses on estimating the performance of an algorithm based on its mathematical properties, without actually executing it.

This approach allows you to analyze the time and space complexity of algorithms and other metrics without the need to implement and run the algorithms.

Posterior analysis, on the other hand, is a method of algorithm analysis that actually executes the algorithm and measures its performance.

This approach provides more accurate and detailed information about the performance of the algorithm, but requires the implementation and execution of the algorithm.

Algorithmic complexity is a measure of the efficiency and performance of an algorithm. Algorithms are usually evaluated in terms of the time and space required to solve a problem or achieve a specific goal.

Two factors are used in measuring the complexity of an algorithm: the time it takes to run and the space (memory) it requires.

For example, suppose we want to write an algorithm to find the maximum value from a list of numbers.

The algorithm can be written in pseudocode or directly in C; a short sketch appears after the example below.

You can test the algorithm by entering different lists of numbers and verifying that it returns the correct maximum value. You can also analyze its time complexity to determine how well it scales for larger inputs.

Example:-

Input: [1, 5, 2, 7, 3]

Output: 7.

Explanation: 7 is the maximum value in the list.
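A minimal C sketch of this find-maximum algorithm (added here for illustration; the function name find_max is chosen for the example) is:

    #include <stdio.h>

    int find_max(const int a[], int n)
    {
        int max = a[0];                  /* assume the first element is the largest    */
        for (int i = 1; i < n; i++)      /* one comparison per remaining element: O(n) */
            if (a[i] > max)
                max = a[i];
        return max;
    }

    int main(void)
    {
        int a[] = {1, 5, 2, 7, 3};
        printf("Maximum = %d\n", find_max(a, 5));   /* prints 7 */
        return 0;
    }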

Look for ways to optimize the algorithm to make it faster and more efficient. This may involve modifying the pseudocode or using more efficient data structures or algorithms.

Example: the sum of two integers.

Step 1: Start
Step 2: Declare three integers a, b, and c
Step 3: Define the values of a and b
Step 4: Add the values of a and b
Step 5: Store the output of Step 4 in c
Step 6: Print c
Step 7: Stop

C provides a rich set of data types and operators that can be used to implement various sorting algorithms such as bubble sort, insertion sort and quick sort.

These algorithms are useful in many applications because they can be used to sort data of different sizes and types.

Bubble sort: an uncomplicated sorting algorithm that repeatedly compares neighbouring elements and swaps them if they are out of order (a short C sketch appears after this list).

Insertion sort: a method of sorting that builds the sorted list one element at a time by placing each element in its appropriate spot.

Selection sort: a method of sorting that repeatedly moves the smallest element from the unordered portion to the end of the sorted portion.

Quicksort: a divide-and-conquer sorting algorithm that chooses a pivot element and splits the list into sublists depending on whether elements are smaller or larger than the pivot. The sublists are then sorted recursively until the full list is sorted.

Merge sort: a divide-and-conquer sorting algorithm that divides the list into two halves, sorts each half, and then merges the two halves in sorted order.

Heap sort: a sorting algorithm that sorts elements using a data structure called a heap. Its main operations are:

    Build heap: starting with the first non-leaf node, compare each node with its children and swap it with the larger child until the max-heap property is satisfied.
    Extract: swap the root (the largest element) with the last element in the heap.
    Sift down: starting from the root, compare each element with its children and swap it with the larger of the two until the max-heap property is satisfied.
    Sift up: start with the last element in the heap, compare it with its parent, and swap them if needed to satisfy the max-heap property.

Radix sort: a sorting algorithm that sorts elements based on the individual digits of their representation.
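The bubble sort sketch mentioned above (an illustrative example added here, not taken from the original tutorial):

    #include <stdio.h>

    void bubble_sort(int a[], int n)
    {
        for (int i = 0; i < n - 1; i++)            /* after pass i, the largest   */
            for (int j = 0; j < n - 1 - i; j++)    /* remaining value is in place */
                if (a[j] > a[j + 1]) {             /* neighbours out of order?    */
                    int tmp = a[j];                /* swap them                   */
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
    }

    int main(void)
    {
        int a[] = {5, 1, 4, 2, 8};
        bubble_sort(a, 5);
        for (int i = 0; i < 5; i++)
            printf("%d ", a[i]);                   /* prints 1 2 4 5 8 */
        printf("\n");
        return 0;
    }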

C also provides the tools necessary to implement a variety of searching algorithms, such as linear search and binary search. These algorithms can quickly find specific items in a data set, making them useful for a wide range of applications.

There are many types of search algorithms. They are:

Linear search: a basic search algorithm that examines each item in the list one by one until it finds the desired item.

Binary search: a search algorithm that works on a sorted list by repeatedly splitting it into halves and searching within the half that could contain the element.

Depth-first search: a search algorithm that explores each branch as far as is feasible before backtracking.

Breadth-first search: a search algorithm that explores all the neighbours of a node before moving to the next level.

Interpolation search: a search algorithm that uses the value being searched for to estimate its likely position in a sorted array. It works as expected only when the values in the array are evenly (roughly uniformly) distributed; otherwise its performance degrades.

Jump search: a search method that steps over a sorted list in constant-length jumps until it passes the relevant element (or determines that it is not present), then scans the last block linearly.

The jump search idea can be sketched as follows:
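This is a hedged sketch added here for illustration (the block size sqrt(n) is the usual choice, and the function name jump_search is ours; link with -lm on some systems for sqrt):

    #include <stdio.h>
    #include <math.h>

    /* Returns the index of key in the sorted array a[0..n-1], or -1 if absent. */
    int jump_search(const int a[], int n, int key)
    {
        int step = (int)sqrt((double)n);          /* block size */
        if (step < 1)
            step = 1;
        int prev = 0, curr = step;

        while (curr < n && a[curr - 1] < key) {   /* jump block by block */
            prev = curr;
            curr += step;
        }
        if (curr > n)
            curr = n;

        for (int i = prev; i < curr; i++)         /* linear scan inside the block */
            if (a[i] == key)
                return i;
        return -1;
    }

    int main(void)
    {
        int a[] = {2, 4, 7, 9, 12, 15, 20};
        printf("Index of 12 = %d\n", jump_search(a, 7, 12));   /* prints 4 */
        return 0;
    }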

C's support for pointers and data structures such as arrays and linked lists makes it suitable for implementing algorithms that manipulate graphs, such as finding the shortest path between two nodes in a graph.

There are different types of graph algorithms. They are:

  • Shortest-path algorithms such as Dijkstra's algorithm: find the shortest route between nodes by repeatedly updating the shortest known distance to each node.
  • Minimum-spanning-tree algorithms such as Prim's and Kruskal's: find the smallest spanning tree of a connected weighted graph.
  • Bellman-Ford algorithm: finds the shortest path from a given source node to every other node, even when the graph has negative edge weights.

C supports low-level operations and efficient data manipulation, making it ideal for implementing algorithms used in cryptography, such as data encryption and decryption algorithms.

There are different types of cryptographic algorithms. They are:

  • Hash functions: these algorithms produce fixed-size outputs (hashes) from arbitrary-sized inputs. Examples include MD5, SHA-1, and SHA-2.
  • Symmetric-key algorithms: the encryption and decryption steps use the same private key. AES, DES, and Blowfish are a few examples.
  • Asymmetric (public-key) algorithms: a public key and a private key are used as separate keys for encryption and decryption. Examples include RSA, ECC, and DSA.
  • Key-exchange algorithms: these allow two parties to exchange keys securely over an insecure channel, for example Diffie-Hellman and Elliptic Curve Diffie-Hellman.

Algorithms have many advantages. They are:

  • Speed and efficiency: algorithms can process large amounts of data quickly and accurately, making them useful for tasks that are too time-consuming or error-prone for people to perform.
  • Consistency: algorithms follow a set of predetermined rules and can produce consistent results without being influenced by personal biases or emotions.
  • Automation: algorithms can perform tasks automatically, leaving people free to focus on more complex or creative tasks.
  • Accuracy: algorithms can often achieve higher levels of accuracy than humans, especially when dealing with large amounts of data.
  • Better decision-making: algorithms help us make more informed and objective decisions by analyzing data and identifying patterns and trends that are not easily visible to people.
  • Scalability: algorithms can easily be scaled up or down to meet changing demands and workloads.

Algorithms are very useful for programming, but they also have drawbacks. They are:

  • Limited scope: algorithms can only solve problems within their scope and may not be able to handle complex or abstract problems.
  • Bias: algorithms can perpetuate and reinforce biases present in the data used to train them, leading to unfair results.
  • Opacity: many algorithms conceal the process by which they arrive at their conclusions, which can make it difficult to interpret or check the results.
  • Data dependence: the correctness of an algorithm depends heavily on the quality and applicability of the data used; faulty data can produce inaccurate results.
  • Inflexibility: algorithms are designed to follow fixed rules and may not adapt to changing circumstances and conditions.





Problem Solving with Algorithms and Data Structures using C++ ¶

By Brad Miller and David Ranum, Luther College, and Jan Pearce, Berea College

  • 1.1. Objectives
  • 1.2. Getting Started
  • 1.3. What Is Computer Science?
  • 1.4. What Is Programming?
  • 1.5. Why Study Data Structures and Abstract Data Types?
  • 1.6. Why Study Algorithms?
  • 1.7. Reviewing Basic C++
  • 1.8. Getting Started with Data
  • 1.9.1. Numeric Data
  • 1.9.2. Boolean Data
  • 1.9.3. Character Data
  • 1.9.4. Pointers
  • 1.10.1. Arrays
  • 1.10.2. Vectors
  • 1.10.3. Strings
  • 1.10.4. Hash Tables
  • 1.10.5. Unordered Sets
  • 1.11.1. Parameter Passing: by Value versus by Reference
  • 1.11.2. Arrays as Parameters in Functions
  • 1.11.3. Function Overloading
  • 1.12.1. A Fraction Class
  • 1.12.2. Abstraction and Encapsulation
  • 1.12.3. Polymorphism
  • 1.12.4. Self Check
  • 1.13.1. Logic Gates and Circuits
  • 1.13.2. Building Circuits
  • 1.14.1. Introduction to Turtles
  • 1.14.2. Turtle & TurtleScreen
  • 1.14.3. Geometry, Shapes, and Stamps
  • 1.14.4. Advanced Features
  • 1.15. Summary
  • 1.16. Discussion Questions
  • 1.17. Programming Exercises
  • 1.18. Glossary
  • 1.19. Matching
  • 2.1. Objectives
  • 2.2.1. Some Needed Math Notation
  • 2.2.2. Applying the Math Notation
  • 2.3. Big-O Notation
  • 2.4.1. Solution 1: Checking Off
  • 2.4.2. Solution 2: Sort and Compare
  • 2.4.3. Solution 3: Brute Force
  • 2.4.4. Solution 4: Count and Compare
  • 2.5. Performance of C++ Data Collections
  • 2.6. Analysis of Array and Vector Operators
  • 2.7. Analysis of String Operators
  • 2.8. Analysis of Hash Tables
  • 2.9. Summary
  • 2.10. Self Check
  • 2.11. Discussion Questions
  • 2.12. Programming Exercises
  • 2.13. Glossary
  • 2.14. Matching
  • 3.1. Objectives
  • 3.2. What Are Linear Structures?
  • 3.3. What is a Stack?
  • 3.4. The Stack Abstract Data Type
  • 3.5. Using a Stack in C++
  • 3.6. Simple Balanced Parentheses
  • 3.7. Balanced Symbols - A General Case
  • 3.8. Converting Decimal Numbers to Binary Numbers
  • 3.9.1. Conversion of Infix Expressions to Prefix and Postfix
  • 3.9.2. General Infix-to-Postfix Conversion
  • 3.9.3. Postfix Evaluation
  • 3.10. What Is a Queue?
  • 3.11. The Queue Abstract Data Type
  • 3.12. Using a Queue in C++
  • 3.13. Simulation: Hot Potato
  • 3.14.1. Main Simulation Steps
  • 3.14.2. C++ Implementation
  • 3.14.3. Discussion
  • 3.15. What Is a Deque?
  • 3.16. The Deque Abstract Data Type
  • 3.17. Using a Deque in C++
  • 3.18. Palindrome-Checker
  • 3.19. Summary
  • 3.20. Discussion Questions
  • 3.21. Programming Exercises
  • 3.22. Glossary
  • 3.23. Matching
  • 4.1. Objectives
  • 4.2. What Are Linked Structures?
  • 4.3. Implementing an Unordered Linked List
  • 4.4. The Node Class
  • 4.5. The Unordered Linked List Class
  • 4.6.1. Analysis of Linked Lists
  • 4.7.1. Forward lists
  • 4.7.2. Lists
  • 4.8. Summary
  • 4.9. Discussion Questions
  • 4.10. Programming Exercises
  • 4.11. Glossary
  • 4.12. Matching
  • 5.1. Objectives
  • 5.2. What Is Recursion?
  • 5.3. Calculating the Sum of a Vector of Numbers
  • 5.4. The Three Laws of Recursion
  • 5.5. Converting an Integer to a String in Any Base
  • 5.6. Stack Frames: Implementing Recursion
  • 5.7. Introduction: Visualizing Recursion
  • 5.8. Sierpinski Triangle
  • 5.9. Complex Recursive Problems
  • 5.10. Tower of Hanoi
  • 5.11. Exploring a Maze
  • 5.12. Dynamic Programming
  • 5.13. Summary
  • 5.14. Self-check
  • 5.15. Discussion Questions
  • 5.16. Programming Exercises
  • 5.17. Glossary
  • 5.18. Matching
  • 6.1. Objectives
  • 6.2. Searching
  • 6.3.1. Analysis of Sequential Search
  • 6.4.1. Analysis of Binary Search
  • 6.5.1. Hash Functions
  • 6.5.2. Collision Resolution
  • 6.5.3. Implementing the Map Abstract Data Type
  • 6.5.4. Analysis of Hashing
  • 6.6. Self Check
  • 6.7. Summary
  • 6.8. Discussion Questions
  • 6.9. Programming Exercises
  • 6.10. Glossary
  • 6.11. Matching
  • 7.1. Objectives
  • 7.2. Sorting
  • 7.3. The Bubble Sort
  • 7.4. The Selection Sort
  • 7.5. The Insertion Sort
  • 7.6. The Shell Sort
  • 7.7. The Merge Sort
  • 7.8. The Quick Sort
  • 7.9. Self Check
  • 7.10. Summary
  • 7.11. Discussion Questions
  • 7.12. Programming Exercises
  • 7.13. Glossary
  • 7.14. Matching
  • 8.1. Objectives
  • 8.2. Examples of Trees
  • 8.3. Vocabulary and Definitions
  • 8.4. Nodes and References
  • 8.5. Parse Tree
  • 8.6. Tree Traversals
  • 8.7. Priority Queues with Binary Heaps
  • 8.8. Priority Queues with Binary Heaps Example
  • 8.9. Binary Heap Operations
  • 8.10.1. The Structure Property
  • 8.10.2. The Heap Order Property
  • 8.10.3. Heap Operations
  • 8.11. Binary Search Trees
  • 8.12. Search Tree Operations
  • 8.13. Search Tree Implementation
  • 8.14. Search Tree Analysis
  • 8.15. Balanced Binary Search Trees
  • 8.16. AVL Tree Performance
  • 8.17. AVL Tree Implementation
  • 8.18. Summary of Map ADT Implementations
  • 8.19. Summary
  • 8.20. Discussion Questions
  • 8.21. Programming Exercises
  • 8.22. Glossary
  • 8.23. Matching
  • 9.1. Objectives
  • 9.2. Vocabulary and Definitions
  • 9.3. The Graph Abstract Data Type
  • 9.4. An Adjacency Matrix
  • 9.5. An Adjacency List
  • 9.6. Implementation
  • 9.7. The Word Ladder Problem
  • 9.8. Building the Word Ladder Graph
  • 9.9. Implementing Breadth First Search
  • 9.10. Breadth First Search Analysis
  • 9.11. The Knight’s Tour Problem
  • 9.12. Building the Knight’s Tour Graph
  • 9.13. Implementing Knight’s Tour
  • 9.14. Knight’s Tour Analysis
  • 9.15. General Depth First Search
  • 9.16. Depth First Search Analysis
  • 9.17. Topological Sorting
  • 9.18. Strongly Connected Components
  • 9.19. Shortest Path Problems
  • 9.20. Dijkstra’s Algorithm
  • 9.21. Analysis of Dijkstra’s Algorithm
  • 9.22. Prim’s Spanning Tree Algorithm
  • 9.23. Summary
  • 9.24. Discussion Questions
  • 9.25. Programming Exercises
  • 9.26. Glossary
  • 9.27. Matching

Acknowledgements

We are very grateful to Franklin Beedle Publishers for allowing us to make the original Python version of this interactive textbook freely available. The original online version was dedicated to the memory of our first editor, Jim Leisy, who wanted us to “change the world.”


C Programming Exercises, Practice, Solution

What is C Programming Language?

C is a general-purpose, imperative computer programming language, supporting structured programming, lexical variable scope and recursion, while a static type system prevents many unintended operations. C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs. It has since become one of the most widely used programming languages of all time, with C compilers from various vendors available for the majority of existing computer architectures and operating systems.

The best way to learn anything is by practice and exercise questions. We have started this section for those (beginner to intermediate) who are familiar with C programming.

We hope these exercises help you improve your C programming skills. The following sections are currently available, and we are working hard to add more exercises. Please refer to this page for important C snippets, code, and examples before starting the exercises. Happy coding!

List of C Programming Exercises :

  • Basic Declarations and Expressions [ 150 Exercises with Solution ]
  • Basic Part-II [ 7 Exercises with Solution ]
  • Basic Algorithm [ 75 Exercises with Solution ]
  • Variable Type [ 18 Exercises with Solution ]
  • Input, Output [ 10 Exercises with Solution ]
  • Conditional Statement [ 26 Exercises with Solution ]
  • While Loop [ 11 Exercises with Solution ]
  • Do-While Loop [ 12 Exercises with Solution ]
  • For Loop [ 61 Exercises with Solution ]
  • Array [ 107 Exercises with Solution ]
  • Structure [ 9 Exercises with Solution ]
  • Pointer [ 22 Exercises with Solution ]
  • Linked List [ 64 Exercises with Solution ]
  • Stack [ 17 Exercises with Solution ]
  • Binary Heap (Tree-Based Structure) [ 9 Exercises with Solution ]
  • Queue [ 13 Exercises with Solution ]
  • Hash [ 10 Exercises with Solution ]
  • Tree [ 10 Exercises with Solution ]
  • Graph [ 10 Exercises with Solution ]
  • Numbers [ 38 Exercises with Solution ]
  • Math [ 38 Exercises with Solution ]
  • String [ 41 Exercises with Solution ]
  • Date Time [ 10 Exercises with Solution ]
  • Function [ 12 Exercises with Solution ]
  • Callback Function [ 11 Exercises with Solution ]
  • Variadic Function [ 8 Exercises with Solution ]
  • Inline Function [ 11 Exercises with Solution ]
  • Recursion [ 21 Exercises with Solution ]
  • File Handling [ 19 Exercises with Solution ]
  • Search and Sorting [ 31 Exercises with Solution ]
  • Challenges [ 35 exercises with solution ]
  • C Snippets [29]
  • More to Come !

[ Want to contribute to C exercises? Send your code (attached with a .zip file) to us at w3resource[at]yahoo[dot]com. Please avoid copyrighted materials.]

Do not submit solutions to the above exercises here; if you want to contribute, go to the appropriate exercise page.



C Exercises – Practice Questions with Solutions for C Programming

The best way to learn the C programming language is by hands-on practice. This C exercise page contains the top 30 C exercise questions with solutions, designed for both beginners and advanced programmers. It covers all major concepts such as arrays, pointers, loops, and many more.


Keep it up! Solve topic-wise C exercise questions to strengthen your weak areas.

C Programming Exercises

The following are the top 30 programming exercises with solutions to help you practice online and improve your coding efficiency in the C language. You can solve these questions online in GeeksforGeeks IDE.

Q1: Write a Program to Print “Hello World!” on the Console.

In this problem, you have to write a simple program that prints “Hello World!” on the console screen.
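For illustration only (a minimal sketch, not the official solution from the exercise page), such a program could look like this:

    #include <stdio.h>

    int main(void)
    {
        printf("Hello World!\n");   /* print the message followed by a newline */
        return 0;
    }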


Q2: Write a Program to find the sum of two numbers entered by the user.

In this problem, you have to write a program that adds two numbers and prints their sum on the console screen.
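A minimal sketch of one possible answer:

    #include <stdio.h>

    int main(void)
    {
        int a, b;
        printf("Enter two numbers: ");
        if (scanf("%d %d", &a, &b) != 2)   /* read both operands */
            return 1;
        printf("Sum = %d\n", a + b);
        return 0;
    }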

Q3: Write a Program to find the size of int, float, double, and char.

In this problem, you have to write a program to print the size of the variable.
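One way to do this is with the sizeof operator; a small illustrative program (the printed sizes depend on the platform):

    #include <stdio.h>

    int main(void)
    {
        printf("int:    %zu bytes\n", sizeof(int));
        printf("float:  %zu bytes\n", sizeof(float));
        printf("double: %zu bytes\n", sizeof(double));
        printf("char:   %zu bytes\n", sizeof(char));
        return 0;
    }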

Q4: Write a Program to Swap the values of two variables.

In this problem, you have to write a program that swaps the values of two variables that are entered by the user.
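An illustrative sketch using a temporary variable:

    #include <stdio.h>

    int main(void)
    {
        int a, b, temp;
        printf("Enter two values: ");
        if (scanf("%d %d", &a, &b) != 2)
            return 1;
        temp = a;          /* classic three-step swap via a temporary */
        a = b;
        b = temp;
        printf("After swap: a = %d, b = %d\n", a, b);
        return 0;
    }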


Q5: Write a Program to calculate Compound Interest.

In this problem, you have to write a program that takes principal, time, and rate as user input and calculates the compound interest.
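As one possible sketch, the amount can be computed with pow() from <math.h> (illustrative only; on some systems you may need to link with -lm):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double principal, rate, years;
        printf("Enter principal, annual rate (%%) and time (years): ");
        if (scanf("%lf %lf %lf", &principal, &rate, &years) != 3)
            return 1;
        /* amount = P * (1 + r/100)^t, compound interest = amount - P */
        double amount = principal * pow(1.0 + rate / 100.0, years);
        printf("Compound interest = %.2f\n", amount - principal);
        return 0;
    }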

Q6: Write a Program to check if the given number is Even or Odd.

In this problem, you have to write a program to check whether the given number is even or odd.
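A small sketch using the modulo operator:

    #include <stdio.h>

    int main(void)
    {
        int n;
        printf("Enter a number: ");
        if (scanf("%d", &n) != 1)
            return 1;
        if (n % 2 == 0)                 /* divisible by 2 means even */
            printf("%d is even\n", n);
        else
            printf("%d is odd\n", n);
        return 0;
    }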

Q7: Write a Program to find the largest number among three numbers.

In this problem, you have to write a program to take three numbers from the user as input and print the largest number among them.
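One possible sketch using simple comparisons:

    #include <stdio.h>

    int main(void)
    {
        int a, b, c;
        printf("Enter three numbers: ");
        if (scanf("%d %d %d", &a, &b, &c) != 3)
            return 1;
        int largest = a;
        if (b > largest) largest = b;
        if (c > largest) largest = c;
        printf("Largest = %d\n", largest);
        return 0;
    }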

Q8: Write a Program to make a simple calculator.

In this problem, you have to write a program to make a simple calculator that accepts two operands and an operator to perform the calculation and prints the result.
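An illustrative sketch built around a switch statement (not the linked solution, just one way it could be written):

    #include <stdio.h>

    int main(void)
    {
        double x, y;
        char op;
        printf("Enter an expression (e.g. 3 + 4): ");
        if (scanf("%lf %c %lf", &x, &op, &y) != 3)
            return 1;
        switch (op) {
            case '+': printf("= %g\n", x + y); break;
            case '-': printf("= %g\n", x - y); break;
            case '*': printf("= %g\n", x * y); break;
            case '/':
                if (y == 0) { printf("Division by zero\n"); return 1; }
                printf("= %g\n", x / y);
                break;
            default:
                printf("Unknown operator '%c'\n", op);
                return 1;
        }
        return 0;
    }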

Q9: Write a Program to find the factorial of a given number.

In this problem, you have to write a program to calculate the factorial (product of all the natural numbers less than or equal to the given number n) of a number entered by the user.
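A minimal iterative sketch (even an unsigned long long overflows for n greater than 20):

    #include <stdio.h>

    int main(void)
    {
        int n;
        unsigned long long fact = 1;   /* wide type, but 21! still overflows */
        printf("Enter n: ");
        if (scanf("%d", &n) != 1 || n < 0)
            return 1;
        for (int i = 2; i <= n; ++i)
            fact *= i;
        printf("%d! = %llu\n", n, fact);
        return 0;
    }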

Q10: Write a Program to Convert Binary to Decimal.

In this problem, you have to write a program to convert the given binary number entered by the user into an equivalent decimal number.
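One illustrative approach reads the binary number as a string and processes the digits left to right, doubling the running value at each step:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char bits[65];
        printf("Enter a binary number: ");
        if (scanf("%64s", bits) != 1)
            return 1;
        unsigned long long value = 0;
        for (size_t i = 0; i < strlen(bits); ++i) {
            if (bits[i] != '0' && bits[i] != '1') {
                printf("Not a binary digit: %c\n", bits[i]);
                return 1;
            }
            value = value * 2 + (bits[i] - '0');   /* shift left, add next bit */
        }
        printf("Decimal: %llu\n", value);
        return 0;
    }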

Q11: Write a Program to print the Fibonacci series using recursion.

In this problem, you have to write a program to print the Fibonacci series(the sequence where each number is the sum of the previous two numbers of the sequence) till the number entered by the user using recursion.
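A sketch using the classic (exponential-time) recursive definition, for illustration:

    #include <stdio.h>

    /* naive recursive Fibonacci: fib(0)=0, fib(1)=1, fib(n)=fib(n-1)+fib(n-2) */
    unsigned long long fib(int n)
    {
        return (n < 2) ? (unsigned long long)n : fib(n - 1) + fib(n - 2);
    }

    int main(void)
    {
        int n;
        printf("How many terms? ");
        if (scanf("%d", &n) != 1)
            return 1;
        for (int i = 0; i < n; ++i)
            printf("%llu ", fib(i));
        printf("\n");
        return 0;
    }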


Q12: Write a Program to Calculate the Sum of Natural Numbers using recursion.

In this problem, you have to write a program to calculate the sum of natural numbers up to a given number n.
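A minimal recursive sketch:

    #include <stdio.h>

    /* 1 + 2 + ... + n, defined recursively */
    int natural_sum(int n)
    {
        return (n <= 0) ? 0 : n + natural_sum(n - 1);
    }

    int main(void)
    {
        int n;
        printf("Enter n: ");
        if (scanf("%d", &n) != 1)
            return 1;
        printf("Sum = %d\n", natural_sum(n));
        return 0;
    }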

Q13: Write a Program to find the maximum and minimum of an Array.

In this problem, you have to write a program to find the maximum and the minimum element of the array of size N given by the user.
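One possible sketch that scans the array once (capped here at 1000 elements for simplicity):

    #include <stdio.h>

    int main(void)
    {
        int n, a[1000];
        printf("Enter N: ");
        if (scanf("%d", &n) != 1 || n <= 0 || n > 1000)
            return 1;
        for (int i = 0; i < n; ++i)
            scanf("%d", &a[i]);
        int max = a[0], min = a[0];
        for (int i = 1; i < n; ++i) {
            if (a[i] > max) max = a[i];   /* track the running maximum */
            if (a[i] < min) min = a[i];   /* track the running minimum */
        }
        printf("Max = %d, Min = %d\n", max, min);
        return 0;
    }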

Q14: Write a Program to Reverse an Array.

In this problem, you have to write a program to reverse an array of size n entered by the user. Reversing an array means changing the order of elements so that the first element becomes the last element and the second element becomes the second last element and so on.
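An illustrative in-place sketch that swaps symmetric pairs, working inwards from both ends (size capped at 1000 for simplicity):

    #include <stdio.h>

    int main(void)
    {
        int n, a[1000];
        printf("Enter n: ");
        if (scanf("%d", &n) != 1 || n <= 0 || n > 1000)
            return 1;
        for (int i = 0; i < n; ++i)
            scanf("%d", &a[i]);
        for (int i = 0, j = n - 1; i < j; ++i, --j) {
            int tmp = a[i];   /* swap the i-th element with its mirror */
            a[i] = a[j];
            a[j] = tmp;
        }
        for (int i = 0; i < n; ++i)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }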


Q15: Write a Program to rotate the array to the left.

In this problem, you have to write a program that takes an array arr[] of size N from the user and rotates the array to the left (counter-clockwise direction) by D steps, where D is a positive integer. 

Q16: Write a Program to remove duplicates from the Sorted array.

In this problem, you have to write a program that takes a sorted array arr[] of size N from the user and removes the duplicate elements from the array.

Q17: Write a Program to search elements in an array (using Binary Search).

In this problem, you have to write a program that takes an array arr[] of size N and a target value from the user. Search for the target value using binary search; if the target value is found, print its index, else print ‘element is not present in array’.
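A sketch of iterative binary search on an already sorted array (the hard-coded array and target below are made up purely for demonstration):

    #include <stdio.h>

    /* returns the index of target in sorted array a[0..n-1], or -1 if absent */
    int binary_search(const int a[], int n, int target)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) */
            if (a[mid] == target) return mid;
            if (a[mid] < target)  lo = mid + 1;
            else                  hi = mid - 1;
        }
        return -1;
    }

    int main(void)
    {
        int a[] = { 2, 5, 8, 12, 16, 23, 38 };
        int idx = binary_search(a, 7, 23);
        if (idx >= 0) printf("Found at index %d\n", idx);
        else          printf("element is not present in array\n");
        return 0;
    }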

Q18: Write a Program to reverse a linked list.

In this problem, you have to write a program that takes a pointer to the head node of a linked list, reverses the linked list, and prints the reversed list.
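An illustrative sketch that reverses the list iteratively by re-pointing each node at its predecessor; the small list built in main exists only for demonstration:

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int data; struct node *next; };

    /* iterative in-place reversal: walk the list, flipping each next pointer */
    struct node *reverse(struct node *head)
    {
        struct node *prev = NULL;
        while (head) {
            struct node *next = head->next;
            head->next = prev;
            prev = head;
            head = next;
        }
        return prev;
    }

    int main(void)
    {
        struct node *head = NULL;
        for (int i = 5; i >= 1; --i) {          /* build the list 1->2->3->4->5 */
            struct node *n = malloc(sizeof *n);
            if (!n) return 1;
            n->data = i;
            n->next = head;
            head = n;
        }
        head = reverse(head);
        for (struct node *p = head; p; p = p->next)
            printf("%d ", p->data);             /* prints 5 4 3 2 1 */
        printf("\n");
        return 0;
    }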

Q18: Write a Program to create a dynamic array in C.

In this problem, you have to write a program to create an array of size n dynamically, then read n elements into it one by one from the user. Print the array elements.

Q19: Write a Program to find the Transpose of a Matrix.

In this problem, you have to write a program to find the transpose of a matrix for a given matrix A with dimensions m x n and print the transposed matrix. The transpose of a matrix is formed by interchanging its rows with columns.

Q20: Write a Program to concatenate two strings.

In this problem, you have to write a program to read two strings str1 and str2 entered by the user and concatenate these two strings. Print the concatenated string.

Q21: Write a Program to check if the given string is a palindrome string or not.

In this problem, you have to write a program to read a string str entered by the user and check whether it is a palindrome or not. If str is a palindrome, print ‘str is a palindrome’; otherwise print ‘str is not a palindrome’. A string is said to be a palindrome if the reverse of the string is the same as the string.
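One possible sketch that compares mirrored characters (it reads a single word, without spaces, for simplicity):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char str[256];
        printf("Enter a string: ");
        if (scanf("%255s", str) != 1)
            return 1;
        size_t len = strlen(str);
        int is_pal = 1;
        for (size_t i = 0; i < len / 2; ++i)          /* compare mirrored characters */
            if (str[i] != str[len - 1 - i]) { is_pal = 0; break; }
        printf("%s is %sa palindrome\n", str, is_pal ? "" : "not ");
        return 0;
    }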

Q22: Write a program to print the first letter of each word.

In this problem, you have to write a simple program to read a string str entered by the user and print the first letter of each word in a string.

Q23: Write a program to reverse a string using recursion

In this problem, you have to write a program to read a string str entered by the user and reverse it using recursion. Reversing a string means changing the order of its characters so that the last character becomes the first character of the string.


Q24: Write a program to print a half-pyramid pattern.

In this problem, you have to write a simple program to read the number of rows (n) entered by the user and print the half-pyramid pattern of numbers. Half pyramid pattern looks like a right-angle triangle of numbers having a hypotenuse on the right side.

Q25: Write a program to print Pascal’s triangle pattern.

In this problem, you have to write a simple program to read the number of rows (n) entered by the user and print Pascal’s triangle pattern. Pascal’s Triangle is a pattern in which the first row has a single number 1 and every row begins and ends with the number 1. The numbers in between are obtained by adding the two numbers directly above them in the previous row.


Q26: Write a program to sort an array using Insertion Sort.

In this problem, you have to write a program that takes an array arr[] of size N from the user and sorts the array elements in ascending or descending order using insertion sort.
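An illustrative insertion-sort sketch on a small hard-coded array (ascending order):

    #include <stdio.h>

    /* insertion sort: grow a sorted prefix by inserting each new element into place */
    void insertion_sort(int a[], int n)
    {
        for (int i = 1; i < n; ++i) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];   /* shift larger elements one slot to the right */
                --j;
            }
            a[j + 1] = key;
        }
    }

    int main(void)
    {
        int a[] = { 9, 4, 7, 1, 3 };
        insertion_sort(a, 5);
        for (int i = 0; i < 5; ++i)
            printf("%d ", a[i]);   /* prints 1 3 4 7 9 */
        printf("\n");
        return 0;
    }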

Q27: Write a program to sort an array using Quick Sort.

In this problem, you have to write a program that takes an array arr[] of size N from the user and sorts the array elements in ascending order using quick sort.

Q28: Write a program to sort an array of strings.

In this problem, you have to write a program that reads an array of strings entered by the user, in which all characters are of the same case, and sorts them alphabetically.

Q29: Write a program to copy the contents of one file to another file.

In this problem, you have to write a program that takes user input for the filenames to read from and write to. Read the contents of one file and copy them to the other file. If the file specified for reading does not exist or cannot be opened, display the error message “Cannot open file: file_name” and terminate the program; otherwise print “Content copied to file_name”.

Q30: Write a program to store information on students using structure.

In this problem, you have to write a program that stores information about students using structure. The program should create various structures, each representing a student’s record. Initialize the records with sample data having data members’ Names, Roll Numbers, Ages, and Total Marks. Print the information for each student.

We hope that after completing these C exercises you have gained a better understanding of C concepts. Learning the C language is made easier with this exercise sheet, as it helps you practice all major C concepts. Solving these C exercise questions will take you a step closer to becoming a C programmer.

Frequently Asked Questions (FAQs)

Q1. What are some common mistakes to avoid while doing C programming exercises?

Some of the most common mistakes beginners make in C programming exercises include missing semicolons, faulty loop logic, uninitialized pointers, and forgotten calls to free().

Q2. What are the best practices for beginners starting with C programming exercises?

Best practices for beginners starting with C programming exercises:

  • Start with easy programs
  • Practice consistently
  • Be creative
  • Think before you code
  • Learn from mistakes
  • Repeat!

Q3. How do I debug common errors in C programming exercises?

You can use the following methods to debug code in C programming exercises:

  • Read the error message carefully
  • Read the code line by line
  • Try isolating the faulty code
  • Look for missing elements, loops, pointers, etc.
  • Check the error online



A sim-learnheuristic algorithm for solving a capacitated dispersion problem under stochastic and non-static conditions

  • Elnaz Ghorbani 1,2 , 
  • Juan F. Gomez 2 , 
  • Javier Panadero 3 , 
  • Angel A. Juan 2
  • 1. Department of Computer Science, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
  • 2. Research Center on Production Management and Engineering, Universitat Politècnica de València, 03801 Alcoy, Spain
  • 3. Department of Computer Architecture and Operating Systems, Universitat Autònoma de Barcelona, 08193 Bellaterra, Spain
  • Received: 11 June 2024 Revised: 30 July 2024 Accepted: 05 August 2024 Published: 16 August 2024

MSC : 68T20, 90-08, 90-10, 90Bxx, 90B36


A fundamental assumption in addressing real-world problems is acknowledging the presence of uncertainty and dynamism. Dismissing these factors can lead to the formulation of an optimal solution for an entirely different problem. This paper presents a novel variant of the capacitated dispersion problem (CDP) referred to as the stochastic and non-static CDP. The main objective of this problem is to strategically position facilities to achieve maximum dispersion while meeting the capacity demand constraint. The proposed approach combines stochastic and non-static elements, introducing a new paradigm to address the problem. This innovation allows us to consider more realistic and flexible environments. To solve this challenging problem, a novel sim-learnheuristic algorithm is proposed. This algorithm combines a biased-randomized metaheuristic (optimization component) with a simulation component (to model the uncertainty) and a machine learning component (to model non-static behavior). The non-static part works by using black box and white box mechanisms to learn the uncertainty with some related facilities' variables. Based on an extended set of traditional benchmarks for the CDP, a series of computational experiments were carried out. The results demonstrate the effectiveness of the proposed sim-learnheuristic approach for solving the CDP under non-static and stochastic scenarios.

Keywords: capacitated dispersion problem, sim-learnheuristic, machine learning

Citation: Elnaz Ghorbani, Juan F. Gomez, Javier Panadero, Angel A. Juan. A sim-learnheuristic algorithm for solving a capacitated dispersion problem under stochastic and non-static conditions[J]. AIMS Mathematics, 2024, 9(9): 24247-24270. doi: 10.3934/math.20241180



Figures and Tables


  • Figure 1. Visualizing a SNS-CDP: Balancing distribution efficiency with resource limitations
  • Figure 2. The flowchart of the constructive heuristic algorithm
  • Figure 3. The flowchart of the MSBR algorithm
  • Figure 4. The sim-learnheuristic schema for solving the SNS-CDP
  • Figure 5. Behaviors of the black box using three different capacities and high and low coefficients' values
  • Figure 6. Comparison of solutions across different scenarios for all the instances
  • Figure 7. Comparison of percentage gaps between MSBR-Sim-ML and other solution methods


  • Open access
  • Published: 17 August 2024

Three layered sparse dictionary learning algorithm for enhancing the subject wise segregation of brain networks

  • Muhammad Usman Khalid 1 ,
  • Malik Muhammad Nauman 2 ,
  • Sheeraz Akram 1 &
  • Kamran Ali 2  

Scientific Reports, volume 14, Article number: 19070 (2024)


  • Computational biology and bioinformatics
  • Computational neuroscience
  • Image processing
  • Mathematics and computing
  • Statistical methods

Independent component analysis (ICA) and dictionary learning (DL) are the most successful blind source separation (BSS) methods for functional magnetic resonance imaging (fMRI) data analysis. However, ICA to higher and DL to lower extent may suffer from performance degradation by the presence of anomalous observations in the recovered time courses (TCs) and high overlaps among spatial maps (SMs). This paper addressed both problems using a novel three-layered sparse DL (TLSDL) algorithm that incorporated prior information in the dictionary update process and recovered full-rank outlier-free TCs from highly corrupted measurements. The associated sequential DL model involved factorizing each subject’s data into a multi-subject (MS) dictionary and MS sparse code while imposing a low-rank and a sparse matrix decomposition restriction on the dictionary matrix. It is derived by solving three layers of feature extraction and component estimation. The first and second layers captured brain regions with low and moderate spatial overlaps, respectively. The third layer that segregated regions with significant spatial overlaps solved a sequence of vector decomposition problems using the proximal alternating linearized minimization (PALM) method and solved a decomposition restriction using the alternating directions method (ALM). It learned outlier-free dynamics that integrate spatiotemporal diversities across brains and external information. It differs from existing DL methods owing to its unique optimization model, which incorporates prior knowledge, subject-wise/multi-subject representation matrices, and outlier handling. The TLSDL algorithm was compared with existing dictionary learning algorithms using experimental and synthetic fMRI datasets to verify its performance. Overall, the mean correlation value was found to be \(26\%\) higher for the TLSDL than for the state-of-the-art subject-wise sequential DL (swsDL) technique.


Introduction

During cognition or rest, fMRI measurements capture brain activity dynamics that are renowned for being noisy, which makes drawing statistical inferences a significant challenge 1 . Despite eliminating the noise sources due to machine, head motion, and physiology, the anatomical localization of neuronal responses triggered by the same stimulus might differ across brains 2 . This is further exacerbated by the experimental response’s divergence from the expected response due to variations in hemodynamics response (HR) across brains 3 . This makes hypothesis-driven univariate methodology such as the general linear model (GLM) in statistical parametric mapping (SPM) toolbox 4 an inferior approach. Mainly, the regressors for the design matrix of GLM constructed by convolving the HR function (HRF) with stimulus are incapable of adapting to variations across brains and their regions.

In contrast, a multivariate data-driven approach, for instance, blind source separation based matrix factorization, that recovers an approximation of the original signals, is fully adaptable without requiring prior knowledge about the mixing process of the training data. Diversity measures such as probabilistic distribution 5 , statistical attributes 6 , geometrical structures 7 , and sparse representation 8 have yielded several successful outcomes for fMRI analysis by unmasking hidden brain structures 9 , 10 , 11 . In particular, multivariate data matrix decomposition approaches have proven very beneficial for resting state functional connectivity studies and task-based activation identification 12 , 13 , 14 , 15 , 16 , 17 , 18 . Despite accounting for the idiosyncrasy, these constraint signal separation problems lack the estimation power for subject-wise analysis due to limited training data and the mathematical model’s inadequacy to take into consideration functional–anatomical brain differences and complexities, which other brains in group studies may offer 19 .

While group ICA 20 , multi-subject DL 21 , compressed online DL 22 , hybrid concatenation scheme for DL 23 , shared and subject-specific DL 24 , sparse group bases for DL 25 , and low-rank Tucker 2 model (LRT-2) 26 offer excellent analysis for group studies, some of them are irrelevant for resting state data or unable to yield subject-wise unique brain activity. Their model design endeavours to either (i) extract shared/individual dynamics where resting state data is complex to model due to the absence of shared temporal dynamics or (ii) aggregate dynamics to retrieve common variations where subject-wise analysis is exasperating to achieve because the model itself is ineffective.

Figure 1. A flowchart describing the proposed multi-tier approach. The details of the mathematical notations that appeared in this figure are given in the method section.

Motivated by the shared neural response modeling 2 , 27 , the aforementioned problems were addressed and resolved in the recently proposed swsDL algorithm and its block variant swbDL 19 , which consisted of extracting the spatiotemporal components across brains using the computationally efficient sparse spatiotemporal BSS (ssBSS) 28 method and integrating them through the multi-subject base dictionary, base sparse code, and sparse representation matrices (RMs). Combining common variations and separating unique ones from multiple subjects via sparse RMs provides statistical strength to subject-wise dictionary learning. However, due to the minor segregation offered by ssBSS spatial components and the presence of background noise (outliers) in TCs, this approach may result in poor subject-wise discrimination potential for both swsDL and swbDL. One procedure to resolve these problems is to provide these algorithms with (i) at least moderately segregated spatial activations, (ii) spatial and temporal prior information, and (iii) an outlier elimination scheme with automated suppression capability that down-weighs all those data points that are distant from the bulk of the data.

To address the inherent limitations of the BSS techniques compared to model-driven GLM, some data-driven approaches have allowed incorporating prior information 29 , 30 , 31 . Previous research has provided much evidence and discussion supporting the validity of the sparse assumption for the separation of fMRI signals 32 , 33 , 34 , 35 , and the benefits gained by including the prior information in the learning process in the form of modeled hemodynamic response functions (MHRs) and intrinsic brain networks (IBNs) 36 , 37 , 38 .

In this paper, instead of fixing the model-driven dictionary atoms to extract highly overlapped spatial maps as in the case of supervised dictionary learning 36 , a different approach is adopted where at the first two levels, multi-subject components from the ssBSS method are fed into the block-wise dictionary learning algorithm to extract subject-wise dictionary and sparse codes with moderate spatial overlaps, which are then concatenated with the normalized MHRs and IBNs, respectively. At the third level, sequential learning is performed using these concatenated dynamics. Moreover, the imposition of dictionary atom decomposition at the third level assures decomposition stability when facing variations (anomaly) from the Gaussian noise model. Here, we define the anomaly in the fMRI time series as time points showing increased noise levels, artifacts, and outliers. Anomalies may occur due to magnetic field instabilities, head movement, physiological influences, and various preprocessing steps involved in managing fMRI data.

It is noteworthy that only this three-level learning strategy can accommodate the proposed model for sequential learning given by equation ( 8 ) to perform (i) the decomposition of subject-wise data into multi-subject TCs and SMs, (ii) the incorporation of prior TCs and SMs information, and (iii) the elimination of outliers in the TCs. Analytically, it is observed that the three-level strategy enhances the separation of brain regions with high spatial overlaps. Overall, there is an increase in the likelihood of extracting source signals more closely resembling the ground truth. The three-level strategy is explained with the help of a block diagram in Fig. 1 .

Our proposed multi-factor matrix factorization approach is somewhat similar to the adaptive distribution modeling-based collaborative clustering approach (ADCoC) with regard to mathematical formulation; for instance, it decomposes the feature matrix into a pair of semi-orthogonal dictionaries and their corresponding coefficient matrix. However, there is a significant difference, as our approach aims to decompose each subject’s whole-brain fMRI data into multi-subject dictionary/sparse code and their corresponding representation matrices, whereas ADCoC focuses on clustering subjects and features through non-negative matrix factorization 39 .

Methods and materials

Scalar values are indicated by lowercase or uppercase italic letters, vectors are designated by lowercase letters, and matrices are represented by uppercase letters. Vectors with subscripts and superscripts denote the particular column and row of the matrix, accordingly. Subscript m attached to a matrix/vector indicates the m th subject whose values are in this range \(m=\{1,\hdots ,M\}\) . Superscripts a , b , and c attached to a matrix/vector signify the first, second, and third level dictionary/sparse code/RMs, respectively. Subscript q attached to a matrix/vector defines a base dictionary/sparse code. Matrices \(\text {S} \in \mathbb {R}^{N\times K_3}\) and \(\text {T} \in \mathbb {R}^{N\times K_2}\) contain the IBNs and MHRs, respectively.

Why three level learning

There is no redundancy for decomposing each subject’s data using a three-level approach, as we are constantly adding features to both second and third-tier learning. In the second layer, the gain is derived from the multi-subject dynamics learned in the previous level. In the third tier, the gain is derived from preexisting neuroscience knowledge.

Why not add multi-subject features and prior information at the second level and replace three-level learning with two? This could have resulted in an inferior approach because spatial features obtained from ssBSS are not mature enough in terms of spatial localization to be on equal footing with the prior information. Dynamics that have been at least moderately segregated would gain more by adding the prior information.

Using ssBSS at the first tier can also not be avoided due to the requirement of constructing a multi-subject base dictionary/sparse code while keeping the numerical cost minimal.

Another valid question is why not use sequential learning on both the second and third levels. The main reason to avoid this is the computing burden of sequential learning.

Why robust principal component analysis

Utilizing a robust function through an \(l_1\) data fidelity term 40 , a capped \(l_1\) norm loss function 41 , or an \(\alpha \)-divergence based data fidelity term 42 results in a computationally intensive weighted least squares (WLS) operation. Therefore, an alternative approach based on robust principal component analysis is considered and achieved using principal component pursuit (PCP).

Conventional robust learning using WLS may not work well when there is a base matrix involved because instead of the dictionary getting updated, only the representation matrix gets updated while the base dictionary keeps retaining the outliers.

Using PCP, all samples in the dictionary atoms away from the bulk of the data are automatically clipped based on a threshold value during each iteration of the sequential learning.

Three tier learning

Specifically, (i) at the first layer, the ssBSS method, for each subject, solved using alternating minimization optimization, produced preliminary spatiotemporal components that had minor spatial overlaps, (ii) at the second layer, solved using alternating minimization optimization, the swbDL algorithm exploited multi-subject bases assembled by the previous layer to produce intermediary spatial bases that were moderately but not entirely separated, and iii) at the third layer, the sequential learning algorithm solved using coordinate descent optimization first constructed the base dictionary and sparse code by concatenating subject-wise temporal and spatial components from the previous layer with MHRs and IBNs, respectively, then utilized this base dictionary to produce segregated spatial maps and outlier free temporal dynamics via alternating directions method for each subject.

First tier learning

The ssBSS method 28 was chosen at the first tier for its computational efficiency in constructing the multi-subject base dictionary for second-level learning. The first tier decomposition of fMRI data matrix \(\text {Y}_m \in \mathbb {R}^{N\times V}\) from m th subject into temporal source matrix \(\text {D}^a_m \in \mathbb {R}^{N\times P}\) and spatial source matrix \(\text {X}_m^a \in \mathbb {R}^{P\times V}\) was accomplished by considering the following dictionary learning optimization model with double sparsity 28 , 43

The smoothness of the blood-oxygen-level-dependent (BOLD) signal is explained by \(\text {D}_m^a=\text {D}_q^a\text {A}^a_m\) , which stores discrete cosine transform (DCT) bases in \(\text {D}_q^a \in \mathbb {R}^{N\times K_p}\) . The p -th column of the sparse representation matrix \(\text {A}^a_m\) , \(P<K_p<N\) , is given by \(\text {a}^a_{m,p}\) . The \(l_0\) norm, denoted as \(\left\Vert .\right\Vert _0\) , induced sparsity by quantifying the number of non-zero elements. The \(l_1\) norm on \(\text {X}^a_m\) is specified as \(\left\Vert \text {X}^a_m\right\Vert _1\) , and defined as \(\sum _{p=1}^{P} \sum _{j=1}^{V}|x^{a,p}_{m,j}|\) . Its associated sparsity hyperparameter \(\lambda _1\) controlled the values of the coefficients. The numerical burden associated with equation ( 1 ) became a significant concern when it had to be availed for second-level learning. To circumvent this issue, the blind source separation theory was used to decompose equation ( 1 ) into two distinct spatial and temporal source separation problems, enabling a computationally efficient solution as

where the matrices \(\text {F}_{m}^t \in \mathbb {R}^{N\times K}\) and \(\text {F}_{m}^s \in \mathbb {R}^{K\times V}\) obtained using principal component analysis (PCA) represented the temporal and spatial features in the reduced dimension, respectively. The mixing matrices \(\text {Q}_m \in \mathbb {R}^{P\times K}\) and \(\text {Z}_m \in \mathbb {R}^{K\times P}\) were unknowns. The sparsity regularization parameter \(\lambda _1\) was estimated using brute force with reference to correlation strength. However, cross-validation could have also been availed to fine-tune this parameter 44 . Equation ( 2 ) was solved until convergence using an alternating minimization approach for the unknown model variables via Neumann’s alternating projection lemma 45 .

Second tier learning

At this tier, the block design dictionary learning technique extracted subject-wise TCs and SMs using multi-subject dictionary/sparse codes as described in novel subject-wise learning method 19 . In the present situation, using the first tier components, the second tier base dictionary and sparse code were constructed as \(\text {D}_q^b = [\text {D}_1^a, \text {D}_2^a, \hdots , \text {D}_M^a] \in \mathbb {R}^{N\times MP}\) , and \(\text {X}_q^b = [\text {X}_1^{a^\top }, \text {X}_2^{a^\top }, \hdots , \text {X}_M^{a^\top }]^\top \in \mathbb {R}^{MP\times V}\) and their respective non-sparse representation matrices \(\text {A}_m^b \in \mathbb {R}^{MP\times K_1}\) and \(\text {B}_m^b \in \mathbb {R}^{K_1\times MP}\) were estimated using the following minimization

where tuning parameter \(\alpha \) is used to penalize the temporal correlation structure between the current \(\text {D}_m^b=\text {D}_q^b\text {A}_m^b\) and the lagged dictionary \(\text {D}_0\) , and \(\lambda _2\) is the sparsity tuning parameter. A combination of soft thresholding and least squares was used to estimate \(\text {B}_m^b\) as

Using a relaxation variable \(\text {U}\) , Eq. ( 3 ) can be reformulated as

The alternating directions method of multipliers (ADMM) technique was used to tackle this problem. It offered a closed-form solution for both \(\text {D}^b_m\) and \(\text {U}\) , required only one tuning parameter, and converged for all possible values 46 . The augmented Lagrangian for Eq. ( 5 ) led us to the following closed-form solution for all unknown variables

where \(\beta _1\) is the tuning parameter and \(\mathscr {W}_1\) is the Lagrangian multiplier. Due to a dictionary-wide normalization restriction, every iteration involved normalizing every column in \(\text {D}^b_m\) and \(\text {U}\) . After initializing \(\text {U}\) and \(\mathscr {W}_1\) to zero, the above expressions were computed till convergence. Overall, Eq. ( 2 ) was solved using Eqs. ( 4 ) and ( 6 ) until convergence in an alternating minimization manner 19 .

Third tier learning

At third tier, the base dictionary and the base sparse code were constructed by the concatenation of subject-wise dictionary/sparse code from the previous tier and MHRs/IBNs as \(\text {D}^c_{q,m} = [\text {D}^b_m, \text {T}] \in \mathbb {R}^{N\times (K_1+K_2)}\) , and \(\text {X}^c_{q,m} = [\text {X}_m^{b^\top },\text {S}^\top ]^\top \in \mathbb {R}^{(K_1+K_3)\times V}\) . Their respective representation matrices were denoted as \(\text {A}_m^c \in \mathbb {R}^{(K_1+K_2)\times K_1}\) and \(\text {B}_m^c \in \mathbb {R}^{K_1\times (K_1+K_3)}\) . From Fig. 1 , the m th subject fMRI dataset can be decomposed as

Although \(\text {B}_m^a\) did not exist for our model, it could if we consider the point spread functions 47 along with the pre-defined network templates 48 . If there existed inverse matrices \(\text {Q}_m^{-1}\) and \(\text {Z}_m^{-1}\) that could factorize PCA features to spatial and temporal components (source matrices) such that \(K>P\) , then the above equation can be further decomposed to

From the above equation, one can draw an analogy between PCA+ICA, PCA+Canonical correlation analysis and dictionary learning. We decomposed each subject’s data into principal components, prior information, and representation (mixing) matrices using the proposed approach. Equations ( 2 ), ( 4 ), and ( 6 ) can be used to learn some of these mixing matrices, and the remaining can be learned by considering the proposed model for sequential learning. It decomposed each subject’s data into a dictionary (a superposition of a low-rank and a sparse matrix \(\text {H}_m+\Psi _m\) ) and a sparse code. This is presented as

This model may lead to redundant decomposition because we are not interested in the respective spatial maps of the sparse matrix \(\Psi _m\) , which only contains background noise. Moreover, it has been observed that the least absolute shrinkage and selection operator (LASSO) method may not be entirely efficient 49 , and its outcomes for model selection may lack consistency 50 . To resolve this, an analogous matrix version of the adaptive LASSO penalty 50 was introduced in the above equation where a data-driven regularization parameter \(\lambda ^{k_1}_{3,j}\) was allocated to each entry of \(\text {x}^{c,k_1}_{m,j}\) . This would allow the application of a more significant amount of shrinkage to entries near zero and a lesser amount to entries with substantial values, resulting in improved outcomes. In terms of sparse representation matrices, dictionary decomposition constraint in the form of \(\text {D}_m^c = \text {H}_m+\Psi _m\) , and adaptive sparse penalty for a fairer assignment of penalty to each entry in \(\text {X}_m^c\) , the above model can be reformulated as

where \(\text {D}^c_m=\text {D}^c_{q,m}\text {A}_{m}^c\) and \(\text {X}^c_m=\text {B}_{m}^c\text {X}^c_{q,m}\) . This objective function decomposed each subject’s data into third tier dictionary, sparse code, and their associated RMs, while imposing sparsity constraint on the sparse code, sparsity constraint on the RMs, and a low-rank and a sparse matrix decomposition constraint on \(\text {D}^c_m\) . This approach allowed us to remove outliers from the recovered TCs and enhance the segregation of highly overlapped spatial maps. However, Eq. ( 8 ) is not convex with respect to unknown matrices \(\text {A}_{m}^c\) , \(\text {B}_{m}^c\) , \(\text {H}_{m}\) , and \(\Psi _{m}\) but convex with respect to each of them when the others are fixed. Also, solving this problem is intractable because the unknowns \(\text {H}_m\) and \(\Psi _m\) do not appear in the cost function. Therefore, we split this objective function into two separate optimization problems by exploiting their relationship through \(\text {D}^c_m=\text {D}^c_{q,m}\text {A}_{m}^c\) given as

The PALM algorithm 51 is a classical optimization method used in sparse BSS, which iteratively updates both the dictionary and the sparse code using proximal gradient steps. It can be deployed to solve Eq. ( 9 ). It was preferred in our case due to the non-convexity (the overall optimization problem is nonconvex) and non-smoothness (contains absolute value function) of Eq. ( 9 ). Its fast convergence was guaranteed because it was shown that the sequence generated by PALM has a finite length property due to Kurdyka-Lojasiewicz (KL) inequality 51 . Whereas, the ALM algorithm, which in fewer iterations achieved higher accuracy than accelerated proximal gradient 52 can be used to solve the convex PCP problem for robust PCA given by Eq. ( 10 ) by repeatedly updating the low rank (full rank in our case, the rank can be ensured by carefully selecting the threshold value), sparse matrix and the Lagrange multiplier matrix. Specifically, ignoring the constraints on the mixing matrices, Eq. ( 9 ) according to PALM can be solved as

where \(\mathscr {S}_{\tau }(\text {x})\) denotes the shrinkage operator given as \(\textrm{sgn}(\text {x})\circ (|\text {x}|-\tau )_+\) , and \(\Pi _{\left\Vert .\right\Vert _2\le 1}\) is the proximal operator of the oblique constraint given as a normalization constraint on each dictionary atom. However, for a sequential atom-wise update, the above equations can be re-written as

where the data-driven tuning parameters \(\lambda ^{k_1}_{3,j}\) are obtained as \(\lambda _3^{\dagger }=\lambda _3\text {1}/\Big (\text {d}_{m,k_1}^{c^\top }(\text {Y}_m-\text {D}_{m}^c\text {X}_m^{c}+\text {d}_{m,k_1}^c\text {x}_m^{c,k_1})\Big )\) . Here \(\text {1}\) is the vector of ones. On the other hand Eq. ( 10 ), which is the principal component pursuit comprising the minimization of \(l_1\) and nuclear norm 52 can be solved by considering the augmented Lagrangian as

which boils down to the following closed-form solution

where \(\mathscr {D}_{\beta }(\mathscr {M})=\text {U}\mathscr {S}_{\beta }(\Sigma )\text {V}^\top \) is a singular value thresholding operator, while \(\mathscr {M} = \text {U}\Sigma \text {V}^\top \) is the SVD of \(\mathscr {M}\) , and \(\mathscr {S}_{\tau }(x)\) denotes the shrinkage operator given as \(\textrm{sgn}(x)\circ (|x|-\tau )_+\) . This strategy would allow to decompose \(\text {D}^c_m\) during each iteration of the algorithm into low-rank and sparse components to remove gross corruptions from each of its recovered time courses.
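In keeping with the C register of the rest of this document, the shrinkage (soft-thresholding) operator \(\mathscr {S}_{\tau }\) used above is just an element-wise operation, \(\textrm{sgn}(x)\max (|x|-\tau ,0)\). The following small C sketch only illustrates that formula; it is not part of the authors' implementation, which was written in MATLAB:

    #include <math.h>
    #include <stdio.h>

    /* Element-wise soft-thresholding: S_tau(x) = sgn(x) * max(|x| - tau, 0) */
    void shrink(double *x, size_t n, double tau)
    {
        for (size_t i = 0; i < n; ++i) {
            double mag = fabs(x[i]) - tau;
            x[i] = (mag > 0.0) ? copysign(mag, x[i]) : 0.0;
        }
    }

    int main(void)
    {
        double v[] = { -1.5, -0.2, 0.0, 0.3, 2.0 };
        shrink(v, 5, 0.5);
        for (int i = 0; i < 5; ++i)
            printf("%g ", v[i]);   /* expected: -1 0 0 0 1.5 */
        printf("\n");
        return 0;
    }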

Figure 2. A block diagram of the multi-tier dictionary where first and second-tier learning extracted preliminary and intermediary components at low computational cost, enabling the third-tier learning to reconstruct the ground truth with high precision.

Group level learning

Since the proposed model (Eq. 8 ) was applied to many subjects, group-level analysis was also performed in addition to the subject-level data decomposition. This was accomplished by using neuroscience knowledge, which consisted of convolving task stimuli and the canonical HRF to construct modelled HRFs (MHRs) for task-related analysis, and acquiring resting-state network templates (RSNs) for resting-state analysis 53 . Firstly, the task-related group components were assembled by concatenating dictionary atoms/sparse codes using MHRs as \(\text {D}^c_r = [\text {d}^c_{1,j_1(r)}, \text {d}^c_{2,j_2(r)},\hdots ,\text {d}^c_{M,j_M(r)}]\) and \(\text {X}^c_r = \Big [\text {x}_1^{c,j_1(r)^\top }, \text {x}_2^{c,j_2(r)^\top },\hdots ,\text {x}_M^{c,j_M(r)^\top }\Big ]^\top \) . Secondly, the concatenated dictionary atoms and sparse codes were subjected to rank-1 decomposition, which was expressed as \( \frac{1}{M}[\text {D}^c_{r}\text {X}^c_r]=\omega _r\delta _r\gamma _r^\top \) , \(\text {d}^c_{g,r} = \omega _r\) , and \(\text {x}_{g}^{c,r} = \delta _r\gamma _r^\top \) .

Here \(r =\{1,\hdots ,R\}\) , the number of MHRs is denoted by R , the indices of the most associated atom in the m -th dictionary with the r -th MHR are indicated by \(j_m(r)\) , \(m=\{1,\hdots ,M\}\) , the group-level dictionary is \(\text {D}^c_g\) , the group-level sparse code is \(\text {X}^c_g\) , and the number of subjects is M . Alternatively, for the resting-state components, only sparse code is built as \(\Pi _r=\Big [\text {x}_1^{c,j_1(r)^\top }, \text {x}_2^{c,j_1(r)^\top }, \hdots , \text {x}_M^{c,j_M(r)^\top }\Big ]^\top \) and \(\text {x}_g^{c,r}=\frac{1}{M}\sum _{m=1}^{M} \pi _r^m\) .

Summing it all up, the TLSDL algorithm to solve the three-layered dictionary learning framework is presented in Fig. 2 . The third tier of the TLSDL algorithm for estimating robust dictionary and sparse code sequentially is described in Table 1 . In order to keep both recovered dynamics and prior information on equal footing, the MHRs, TCs, IBNs, and SMs were normalized, and the absolute values of IBNs and SMs were considered.

Figure 3. In the top row, spatial sources with moderate to significant overlaps are created by varying the size of twelve distinct activation blobs using seven different values of the spread parameter \(\rho \), of which only five are shown; the first subfigure of the second row displays the SMs of the first subject and all unique sources from other subjects, while the other subfigures illustrate the position, shape, and variability of common spatial sources among subjects, as well as distinct spatial sources; and the first subfigure of the bottom row displays the respective TCs of the first subject and all unique sources from other subjects, while the other subfigures demonstrate the temporal variability of common sources, each with its own distinct temporal pattern, and unique temporal sources in the last subfigure.

Experiments

This section evaluated and validated the proposed algorithm by comparing it to the existing state-of-the-art source retrieval data-driven algorithms. This was accomplished by means of fMRI based data analysis, for which, two different fMRI datasets were considered, (i) simulated fMRI dataset for six subjects created using Simtb toolbox 54 , and (ii) block design experimental fMRI dataset for eight participants acquired from quarter 3 release of the Human Connectome Project (HCP) 55 , 56 . Notably, the proposed algorithm is applicable to task-related and resting state fMRI data, but only task-related dataset results have been provided in the paper to avoid increasing its length. The results of the resting state dataset are submitted as part of the Supplementary Material .

The sparse group independent component analysis (sgICA) 20, 57, compressed online dictionary learning (CODL) 22, adaptive consistent sequential dictionary (ACSD) 58, sparse spatiotemporal blind source separation (ssBSS) method 28, subject-wise sequential dictionary learning (swsDL) 19, and subject-wise block dictionary learning (swbDL) 19 were included in the comparison study along with the proposed three-layered sparse dictionary learning (TLSDL). The effectiveness of each algorithm was assessed by how successfully it retrieved the spatial and temporal ground truth from the synthetic and experimental fMRI datasets. All algorithms were implemented in MATLAB.

Synthetic dataset generation

In line with earlier fMRI investigations 19, 28, a synthetic dataset of six subjects was generated in MATLAB using the SimTB toolbox to replicate experimental fMRI data. It comprised twelve distinct temporal sources, each with 300 timepoints at a repetition time (TR) of 1 second, and twelve different spatial sources, each on a grid of 50 by 50 voxels. The synthetic brain was mapped using the source IDs \(\{3, 6, 8, 10, 22, 23, 26, 30, 4, 12, 5, 29\}\) to produce the spatial components. For each subject, a total of seven spatiotemporal sources were used to generate the fMRI data from the source signals. As elaborated in 25, the first six spatiotemporal sources were common to all six subjects, although with some degree of intersubject variability, whereas the remaining six sources were unique, with one assigned to each subject.

Figure 4. Correlation values with respect to the ground truth TCs/SMs for (a,b) subject-wise analysis, computed over 7 source components, 6 subjects, 30 realizations, and 3 temporal noise variance cases, and (c,d) group-level analysis, computed over 7 components, 30 realizations, and 3 temporal noise variance cases. In both cases, the deviation from the mean is plotted as error bars. The \(\rho \) values along the x-axis indicate the extent of spatial overlap.

The HRF parameters, including delay, response dispersion, and undershoot, were varied for the shared temporal sources to introduce variability across subjects. Similarly, intersubject variability of the common spatial maps was created by adjusting the parameters of the Gaussian distribution, namely the mean (\(\mu \)) and the standard deviation (\(\sigma \)), which control the activations' direction, position, and spread. This was achieved by applying a random translation with a mean of 0 and a standard deviation of 1 along both the x and y axes, a random rotation with a mean of 0 and a standard deviation of 0.9, and a random scaling with a mean of \(\rho \) and a standard deviation of 0.05, as seen in the middle row of Fig. 3. The spread parameter \(\rho \), which serves as the mean of the Gaussian distribution, determines the structural extent of the activations and was assigned the values \(\{3,6,9,12,15,18,21\}\) to generate seven instances of spatial overlap. The top row of Fig. 3 shows the resulting spatial sources, whose interdependence ranges from moderate to large.

The first subject contributes the six common sources and one distinct source, while the next five subjects each provide one distinct spatiotemporal source. These sources were concatenated to create the ground truth time courses (TCs) and spatial maps (SMs) for the group-level analysis, shown in the leftmost column of the bottom two rows of Fig. 3, which serves as the primary ground truth case. The following subfigures illustrate the temporal and spatial sources used to generate each of the six datasets according to \(\text {Y}_m=\sum _{i=1}^{7}(\text {tc}_i+\psi _i)(\text {sm}^i+\phi ^i)\). The noise matrices \(\Psi \in \mathbb {R}^{300\times 7}\) and \(\Phi \in \mathbb {R}^{7\times 2500}\) were drawn from Laplacian distributions with parameters \(\mathscr {L}(0,\,n_t)\) and \(\mathscr {L}(0,\,0.01)\), respectively. In addition to this Laplacian noise, outliers were manually added at time points 60, 120, 180, and 240. The datasets \(\{\text {Y}_m\}_{m=1}^M\) (with \(M=6\)) were then created and used by all source retrieval methods, contingent upon the values of \(\rho \), \(n_t\), and the trial number.
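As a rough illustration of this generative model, the following Python/numpy sketch builds one synthetic subject with Laplacian noise and manually injected outliers. The ground-truth tc and sm arrays are random stand-ins here (in the paper they come from the SimTB toolbox), and the outlier amplitude is an assumption.

import numpy as np

rng = np.random.default_rng(0)
T, V, n_src = 300, 2500, 7              # timepoints, voxels (50 x 50), sources per subject
n_t = 0.6                                # temporal Laplacian scale for this run

# Ground-truth time courses (T x n_src) and spatial maps (n_src x V);
# random placeholders standing in for the SimTB-generated sources.
tc = rng.standard_normal((T, n_src))
sm = rng.standard_normal((n_src, V))

psi = rng.laplace(0.0, n_t, size=(T, n_src))     # temporal noise, L(0, n_t)
phi = rng.laplace(0.0, 0.01, size=(n_src, V))    # spatial noise, L(0, 0.01)

# Manually injected outliers at time points 60, 120, 180, 240 (0-based indices);
# the amplitude 5.0 is an illustrative assumption.
for t in (59, 119, 179, 239):
    psi[t, :] += 5.0

# Y_m = sum_i (tc_i + psi_i)(sm_i + phi_i), written as one matrix product.
Y = (tc + psi) @ (sm + phi)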

Figure 5. (A) Artificially generated spatial and temporal sources, (B) spatial and temporal sources retrieved using swsDL, and (C) spatial and temporal sources recovered using TLSDL. The absolute temporal and spatial correlations (\(\gamma \)) are shown for each source, with the total sum of correlation values on the left side. The red circles on the temporal sources derived from swsDL indicate the outliers that were effectively suppressed by TLSDL but not reduced in intensity by swsDL.

Synthetic dataset dictionary learning

Wherever feasible, the same parameter values were used across all algorithms to provide an impartial comparison. For sgICA and CODL, which were applied to the grouped/concatenated data, the desired number of components was set to 12, with 18 components retained by the subject-wise PCA. For the other algorithms, it was acknowledged that the exact number of underlying sources in experimental fMRI data is unknown; therefore, more components were trained than the number of generating sources per subject. The number of components was set to 12 for ACSD, ssBSS, swsDL, and TLSDL, applied on a per-subject basis. These algorithms, including CODL, were executed for a total of 30 iterations. The best initialization for each method was determined by separately evaluating initialization with the data itself, with random numbers, and with DCT bases. The ssBSS method used random numbers drawn from the standard normal distribution, whereas ACSD, swsDL, and TLSDL employed DCT bases, and CODL was initialized from the concatenated data.
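A DCT-based dictionary initialization of the kind mentioned above can be sketched as follows. The DCT-II variant and the unit-norm scaling are assumptions, since the paper does not specify them, and the function name is hypothetical.

import numpy as np

def dct_dictionary(T, K):
    # First K DCT-II basis vectors of length T, scaled to unit l2 norm,
    # used as an initial dictionary D0.
    n = np.arange(T)
    D = np.array([np.cos(np.pi * (n + 0.5) * k / T) for k in range(K)]).T   # T x K
    return D / np.linalg.norm(D, axis=0, keepdims=True)

D0 = dct_dictionary(T=300, K=12)   # e.g. 12 atoms per subject for the synthetic dataset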

We conducted several trials with various tuning parameter configurations. We selected the ones that produced the most favourable outcomes in terms of the similarity between the ground truth and the estimated sources. In the case of sgICA, twelve components were preserved after the second PCA reduction, although the first PCA yielded 18 components for each participant. The best values for sparsity and smoothing in the case of sgICA were found to be 3 and 50000, respectively. The tuning parameters for ssBSS were set as \(\zeta _1=60\) , \(\lambda _1=3\) , \(K_p=150\) , \(K=14\) , and \(P=7\) . The best sparsity parameter for ACSD was found to be 12. The best tuning parameters for swbDL were found to be \(\lambda _2=8\) , \(\alpha =100\) and \(K_1=12\) . The optimal tuning parameters for swsDL were determined to be \(\zeta _2=\zeta _3=24\) , \(\lambda _2=16\) , and \(K=12\) . The optimal parameters for TLSDL were found to be \(\zeta _2=\zeta _3=8\) , \(\lambda _3=8\) , \(\lambda _4=0.1\) , \(\alpha =150\) , and \(K_1=12\) .

Synthetic dataset results

The process of generating the synthetic dataset and learning from it with the participating algorithms was repeated several times for various instances of noise and spatial overlap to demonstrate the resilience and reliability of the proposed algorithm. Specifically, the experiment was repeated under the following conditions: (i) three variance values (the square of the standard deviation) of the temporal noise, denoted as \(n_t=\{0.6, 0.9, 1.2\}\), (ii) seven values of \(\rho \) ranging from 3 to 21, gradually increased to enlarge the activation overlaps as depicted in the top row of Fig. 3, and (iii) a total of 30 trials.

Furthermore, source recovery was conducted for both subject-level and group-level analysis. The underlying source TCs/SMs were obtained by correlating each algorithm's trained dictionary atoms/sparse code rows with the ground truth TCs/SMs and selecting the indices with the greatest correlation values. These correlation values, computed against the ground truth TCs and SMs (GT-TCs/GT-SMs), were stored as cTC/cSM. For each participating algorithm and each of the seven instances of spatial overlap, we calculated and saved mcTC/mcSM, the average of the cTC/cSM values over all seven spatiotemporal sources, three realizations of temporal noise, and thirty trials of the experiment. For the subject-wise analysis, the mean of the mcTC/mcSM values over the 6 subjects is plotted in Fig. 4 a, b for both temporal and spatial sources; for the group-level analysis, the mcTC/mcSM values are plotted directly, since no averaging over subjects is required. Figure 5 shows the component-wise results of the experiment repeated for \(\rho =8\) and \(n_t=0.6\) using swsDL and TLSDL to highlight the strength of the proposed algorithm. Figure 6 displays the convergence speed of the participating algorithms and the evolution of the average correlation values as a function of the number of iterations, together with the values recovered by the sparse matrix \(\Psi _m\) during the tenth iteration for the first subject.
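The matching of learned atoms and sparse-code rows to the ground truth by maximum absolute correlation can be sketched as below. match_by_correlation is a hypothetical helper written in Python for illustration, not part of the authors' MATLAB code.

import numpy as np

def match_by_correlation(estimates, ground_truth):
    # For each ground-truth signal (row of ground_truth), return the maximum
    # absolute Pearson correlation over all estimated signals (rows of estimates).
    E = estimates - estimates.mean(axis=1, keepdims=True)
    G = ground_truth - ground_truth.mean(axis=1, keepdims=True)
    E /= np.linalg.norm(E, axis=1, keepdims=True)
    G /= np.linalg.norm(G, axis=1, keepdims=True)
    C = np.abs(G @ E.T)                  # n_truth x n_estimates correlation matrix
    return C.max(axis=1)                 # best-matching correlation per ground-truth source

# cSM = match_by_correlation(sparse_codes, gt_sms)      # spatial matching (rows = maps)
# cTC = match_by_correlation(dictionary.T, gt_tcs.T)    # temporal matching (atoms as rows)
# mcSM, mcTC = cSM.mean(), cTC.mean()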

Figure 6. (a) Average correlation values over all spatiotemporal sources and (b) convergence rate, both plotted against the number of algorithm iterations; the deviation from the mean is shown as error bars. (c) Temporal background noise recovered by the sparse matrix during the tenth iteration of the algorithm.

Based on the results shown in Fig. 4, the proposed TLSDL repeatedly achieved better results than all other algorithms across the source recovery settings, comprising spatial/temporal features, subject-wise/group-level analysis, and different degrees of spatial overlap. Taking Table 2 into account as well, TLSDL showed superior recovery performance for all spatiotemporal sources without exception, and its performance remained superior to all competing algorithms despite increasing noise intensity and spatial overlap. Its closest competitor, swsDL, was the runner-up for both the subject-wise and group-level analyses. Surprisingly, for spatial source recovery, ssBSS outperformed ACSD, CODL, and sgICA once the spatial overlaps started to increase.

Figure 4 further shows that the TLSDL algorithm consistently outperformed all other algorithms in source recovery for both spatial/temporal features and all spatial overlap cases. It had the highest recovery rate even with a significant degree of spatial dependency among sources and a high level of temporal noise, and although its retrieval capability decreased as the spatial overlaps increased, it retained the best recovery performance. The swsDL method remained the runner-up, with somewhat smaller deviations from the mean correlation values than TLSDL for both the spatial and temporal recovery scenarios. The ssBSS method, much like swsDL, displayed reduced standard deviations, but its mean correlation results were not as strong. Overall, as the spatial overlaps kept increasing, the performance of TLSDL remained comparatively better.

Figure 7. The visual cue (A), left toe (B), left finger (C), right toe (D), right finger (E), and tongue (F) movement tasks of the block design dataset, analyzed using five different methods: (a) CODL, (b) ACSD, (c) swbDL, (d) swsDL, and (e) TLSDL. The extracted common activation maps were thresholded at a significance level of \(p<0.001\) using random field correction. The correlation values corresponding to these spatial maps are provided in Table 3.

Figure 5C shows that TLSDL effectively eliminated all outliers and, as a result, achieved a higher overall temporal correlation strength than swsDL. It can also be observed that the SMs recovered by TLSDL had higher correlation strength owing to its greater capability of segregating spatial maps.

Figure 6c shows the background noise removed from the dictionary atoms during the tenth iteration of the algorithm; the outliers were clearly clipped from the TCs. Figure 6b indicates that the TLSDL algorithm and the swsDL approach converged at an equal pace. The same tendency is visible in Fig. 6a, where the correlation strength for both TLSDL and swsDL practically stopped rising after the tenth iteration. In contrast, ACSD and ssBSS continued to improve their source recovery as the number of iterations increased, although the ssBSS technique converged slowly.

A source-wise group-level analysis was performed over 30 trials, 3 temporal noise instances, and 7 spatial dependence situations for the 6 algorithms, and the findings are shown in Table 2. Overall, TLSDL performed best in the recovery of every spatiotemporal component, with swsDL coming in as the runner-up.

Block design dataset

The motor task's 3T MRI raw block design dataset was acquired from the HCP's quarter 3 release; the experimental specifics may be found in references 19, 55, 56. The dataset, which covers the brain's motor cortex, was obtained during a 204-second experiment in which participants were instructed to react to visual cues by tapping either their right or left fingers, pinching their right or left toes, or moving their tongues. Each movement task lasted 12 seconds and followed a three-second visual cue. A total of ten movement tasks were taken into account, including two tongue motions, movements of the left and right fingers, and movements of the left and right toes, arranged in 13 blocks, three of which were 15-second fixation blocks. Six modelled HRFs (MHRs) were generated by convolving the task stimuli with the standard hemodynamic response function, one for each of the five movement types, tongue (T), left toe (LT), right toe (RT), left finger (LF), right finger (RF), plus the visual cue (VC), in order to obtain accurate ground truth time courses (TCs). After normalization, these six MHRs were stored in the matrix \(\text {T}\) to be used by the proposed method.
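The construction of one MHR regressor can be sketched as follows: a task boxcar sampled at TR = 0.72 s is convolved with a canonical double-gamma HRF and normalized. This is a Python sketch under assumptions; the double-gamma parameters (peaks near 5-6 s and 16 s, 1/6 undershoot ratio) and the example onsets are not taken from the paper, which only states that the standard HRF was used.

import numpy as np
from scipy.stats import gamma

TR, n_scans = 0.72, 279

def canonical_hrf(tr, duration=32.0):
    # Double-gamma HRF with conventional default parameters (assumed here).
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.max()

def modelled_hrf(onsets_s, dur_s, tr, n_scans):
    # Boxcar for one condition convolved with the canonical HRF, then normalized
    # as it would be before being stored column-wise in the matrix T.
    box = np.zeros(n_scans)
    for onset in onsets_s:
        i0, i1 = int(onset / tr), int((onset + dur_s) / tr)
        box[i0:min(i1, n_scans)] = 1.0
    reg = np.convolve(box, canonical_hrf(tr))[:n_scans]
    return reg / np.linalg.norm(reg)

# Hypothetical onsets (in seconds) for one movement condition:
# mhr_tongue = modelled_hrf([18.0, 120.0], dur_s=12.0, tr=TR, n_scans=n_scans)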

The fMRI scan for each individual was acquired using a Siemens 3 Tesla (3T) scanner with the following parameters: 72 consecutive slices with isotropic \(2 \ \textrm{mm}\) voxels, echo time (TE) of \(33.1 \ \textrm{ms}\), repetition time (TR) of \(0.72 \ \textrm{s}\), field of view (FOV) of \(208 \times 180 \ \textrm{mm}\), flip angle (FA) of 52\(^\circ \), matrix size of 104 by 90, slice thickness of 2 mm, and echo spacing of \(0.58 \ \textrm{ms}\). A total of 284 EPI volumes were collected, and the first five scans were discarded. The dataset used in our analysis comprised eight participants aged 22 to 35.

Block design dataset preprocessing

The block design dataset was preprocessed using the SPM-12 toolbox 4, as mentioned in the paper 19. The preparation steps, which include masking, realignment, normalization, and spatial smoothing, are described in detail in 25, 59. To mitigate motion artifacts, the functional images were realigned to the first image. After spatial normalization to a Talairach template and resampling to \(2 \times 2 \times 2\) \(\textrm{mm}^3\) voxels, all images were spatially smoothed using a Gaussian kernel with a full-width at half-maximum (FWHM) of \(6\times 6 \times 6\) \(\textrm{mm}^3\). The masking step then excluded data outside the scalp. Finally, the four-dimensional dataset was reshaped and saved as a two-dimensional matrix \(\text {Y}\) for each subject, to be treated as a whole-brain dataset.

After completing this phase, a dataset \(\text {Y}\) of size \(279 \times 236115\) was generated for the m-th subject, where m ranges from 1 to 8. Temporal filtering was then applied to the \(\text {Y}\) matrix of every individual: a low-pass filter based on a Gaussian kernel (FWHM) to eliminate high-frequency physiological noise and a high-pass filter based on the DCT to eliminate low-frequency trends, with an FWHM cutoff of 1 second and a DCT filter cutoff of 1/150 Hz. Following the previously explained methodology, \(\text {Y}\) was finally normalized to zero mean and unit variance.
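These filtering and normalization steps can be sketched generically in Python. The DCT regressor construction and the FWHM-to-sigma conversion are standard choices assumed here; they are not taken from the authors' code, and Y is assumed to be a float array of shape time x voxels.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def temporal_filter(Y, tr, hp_cutoff_hz=1.0 / 150, lp_fwhm_s=1.0):
    T = Y.shape[0]
    # High-pass: regress out DCT-II regressors whose frequency k/(2*T*tr) lies
    # below the cutoff (slow drifts).
    n = np.arange(T)
    k_max = int(np.floor(2 * T * tr * hp_cutoff_hz))
    if k_max > 0:
        dct = np.array([np.cos(np.pi * (n + 0.5) * k / T)
                        for k in range(1, k_max + 1)]).T
        dct /= np.linalg.norm(dct, axis=0, keepdims=True)
        Y = Y - dct @ (dct.T @ Y)
    # Low-pass: Gaussian smoothing along time; sigma in samples from the FWHM.
    sigma = lp_fwhm_s / (2.354820045 * tr)
    Y = gaussian_filter1d(Y, sigma=sigma, axis=0)
    # Normalize each voxel time series to zero mean and unit variance.
    return (Y - Y.mean(axis=0)) / (Y.std(axis=0) + 1e-12)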

Figure 8. Group-level activation maps obtained using five different methods: (a) CODL, (b) ACSD, (c) swbDL, (d) swsDL, and (e) TLSDL, thresholded using a random field correction at a significance level of \(p<0.001\). The activation maps were derived for six different templates: (A) medial visual, (B) occipital pole visual, (C) lateral visual, (D) default mode network, (E) frontoparietal right, and (F) frontoparietal left. The correlation values corresponding to these spatial maps are reported in Table 3.

Block design dataset dictionary learning

We experimented with a diverse array of tuning parameter combinations and retained those that yielded the most accurate results in terms of the similarity between the recovered sources and the ground truth. Dictionary initialization used the concatenated data for CODL, random numbers for ssBSS, and DCT bases for ACSD, swbDL, swsDL, and TLSDL; these choices were settled after testing each option for the competing algorithms. The dictionary learning algorithms were executed for 15 iterations, except for CODL and ssBSS, which were executed for 30 iterations due to their slow convergence.

For sgICA, dimensionality reduction preserved 100 components in the first PCA and 60 components in the second PCA, and the same number of components was then extracted by sgICA. The sparsity parameter for sgICA was set to 5, and the smoothing value to 50000. For the ssBSS algorithm, 60 components were selected from the PCA and 40 of them were retained, with the remaining parameters assigned the values \(\lambda _1=16\), \(\zeta _1=50\), \(K_p=60\), \(K=60\), and \(P=40\). CODL was trained with 70 dictionary atoms, a sparsity parameter of 6, and a batch size equal to the data dimension. The ACSD algorithm trained 40 dictionary atoms per subject with a sparsity parameter of 60. Both swbDL and swsDL were trained using a total of 40 atoms; the tuning parameters were \(\lambda _2=12\) and \(\alpha =3000\) for swbDL, and \(\zeta _2=\zeta _3=48\) and \(\lambda _2=25\) for swsDL. The optimal parameters for TLSDL were found to be \(\lambda _4=0.15\), \(\alpha =150\), \(\lambda _3=12\), \(\zeta _2=\zeta _3=8\), and \(K_1=40\).

Block design dataset results

In this section, the unavailability of task-related activation maps prompted us to rely on temporal analysis with the six constructed MHRs; similarly, because resting-state networks (RSNs) do not include temporal profiles, we used Smith's templates. These MHRs and RSNs became the entries of the matrices \(\text {T}\) and \(\text {S}\), respectively. Both subject-wise and group-level analyses were carried out, but only the group-level analysis is discussed in this article. For the subject-wise analysis, the TCs and SMs of each subject obtained from ACSD, swbDL, swsDL, and TLSDL were considered, whereas for CODL and sgICA, group-level TCs and SMs were trained and the individual TCs/SMs were obtained through back reconstruction. For the group-level evaluation, the group-level TCs and SMs produced by each competing algorithm were used as a reference. The common TCs and SMs for ACSD, swbDL, swsDL, and TLSDL were extracted following the group-level TLSDL procedure discussed above. Ultimately, these common TCs were correlated with the MHRs and the averaged SMs were correlated with the RSNs, and the atoms and sparse codes with the most significant correlation values were retained. Further details on this extraction can be found in 25.

Only a limited number of activation maps and temporal dynamics are shown to keep the study concise, even though many were obtained for the block design dataset. Figure 7 shows a series of two-dimensional slices blended to illustrate the three-dimensional volume for various group-level movement tasks. Similarly, only a subset of the resting-state results is presented; for example, Fig. 8 shows the SMs for the first four and last two resting-state network templates. According to the data in Table 3, the proposed TLSDL algorithm surpassed all other algorithms in overall performance: it generated atoms and sparse codes with the greatest correlation with the ground truth, while the swsDL algorithm came in second place. The TLSDL algorithm successfully recovered all task-related and resting-state networks with the highest specificity.

The spatial maps in Fig. 7 make it evident that, compared with the activations found by the other approaches, the activations found by swbDL, swsDL, and TLSDL are confined more exclusively to the motor area. According to Table 3, TLSDL yielded superior correlation values compared with the other approaches. This is also supported graphically by Fig. 8, which shows that the resting-state networks recovered by TLSDL were more specific and more consistent in terms of sensitivity than those produced by the other methods.

The correlation strength accumulation and convergence rate of the CODL, ACSD, swbDL, swsDL, and TLSDL algorithms are shown in Fig. 9 as functions of the algorithm iterations. From this vantage point, it is clear that the TLSDL algorithm gradually gained an advantage over its contenders. The outliers recovered in the sparse matrix by TLSDL for the first subject during the fifteenth iteration are also shown in this figure.

Figure 9. The mean (a) correlation values and (b) convergence rate for subject-wise dictionary learning, calculated as a function of algorithm iterations. (c) Temporal background noise recovered by the sparse matrix during the fifteenth iteration of the algorithm.

A novel multi-tier semi-blind (hypothesis- and data-driven) approach that improves subject-wise dictionary learning has been presented in this paper. Under a sparsity assumption, three types of learning, (i) reduced-dimension, (ii) block-wise, and (iii) sequential, realized in a multi-tier framework, allowed the extraction of spatial and temporal components that were closer to the ground truth and produced better group-level and subject-wise atoms/sparse codes. This is probably because components with significant spatial overlaps have a higher chance of being segregated when prior information and an outlier removal scheme are incorporated into the model in addition to the sparsity assumption. The primary advantage of the proposed strategy is that, in contrast to swsDL, it recovers highly localized fMRI networks and produces noise-free dictionary atoms, at the cost of slightly higher computational complexity. Like swsDL, it can be used for both resting-state connectivity analysis and task-related activation detection in fMRI datasets; however, resting-state experimental data were not considered in this paper to limit its length.

Moreover, the proposed algorithm deploys a unique technique for subject-wise BSS by using the underlying spatiotemporal dynamics from several subjects and prior information in a multi-tier manner while avoiding redundancy at every level. This approach allows comparable components from the reduced-dimension space to be included in order to exploit the spatiotemporal heterogeneity offered by other subjects, which increases the statistical power of the recovered components while keeping them adequately separated from each other. When combined with prior information, these components are segregated even better in the presence of high spatial dependence, and when integrated with the outlier removal scheme, the recovered TCs show promising resemblance to the ground truth. The proposed model draws an analogy between PCA+ICA and dictionary learning (PCA+RMs), emphasizing the training of spatiotemporal representation matrices at three layers using PCA-based components. The efficacy of the proposed approach has been demonstrated using both synthetic and experimental fMRI datasets, and its performance is robust and consistent across datasets. Empirically, the TLSDL algorithm converged in all cases and can be considered a plausible alternative to swsDL.

Data and code availability

The experimental fMRI datasets used in this study are open access and shared on the Human Connectome Project website. The MATLAB code implemented for this study will be made available by the corresponding author upon reasonable request.

Friston, K. J. Modalities, modes, and models in functional neuroimaging. Science 326 , 399–403. https://doi.org/10.1126/science.1174521 (2009).


Huang, J. et al. Learning shared neural manifolds from multi-subject FMRI data. In EEE International Workshop on Machine Learning for Signal Processing (MLSP) . 01–06 (2022).

Aguirre, G. K., Zarahn, E. & D’esposito, M. The variability of human, BOLD hemodynamic responses. Neuroimage 8 , 360–369. https://doi.org/10.1006/nimg.1998.0369 (1998).


Penny, W., Friston, K., Ashburner, J., Kiebel, S. & Nichols, T. Statistical Parametric Mapping: The Analysis of Functional Brain Images (Academic Press, 2007).

Moussaoui, S., Brie, D., Caspary, O. & Mohammad-Djafari, A. A Bayesian method for positive source separation. In IEEE International Conference on Acoustics, Speech, and Signal Processing . vol. 5. V-485 (2004).

Hyvärinen, A. & Oja, E. Independent component analysis: Algorithms and applications. Neural Netw. 13 , 411–430. https://doi.org/10.1016/S0893-6080(00)00026-5 (2000).

Zhuang, X., Yang, Z. & Cordes, D. A technical review of canonical correlation analysis for neuroscience applications. Human Brain Mapp. 41 , 3807–3833. https://doi.org/10.1002/hbm.25090 (2020).


Aharon, M., Elad, M. & Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54 , 4311–4322. https://doi.org/10.1109/TSP.2006.881199 (2006).


Khalid, M. U. & Seghouane, A.-K. Constrained maximum likelihood based efficient dictionary learning for fMRI analysis. In IEEE International Symposium on Biomedical Imaging (ISBI) . 45–48 (2014).

Khalid, M. U. & Seghouane, A.-K. Unsupervised detrending technique using sparse dictionary learning for fMRI preprocessing and analysis. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . 917–921 (2015).

Khalid, M. U. & Seghouane, A.-K. Multi-subject fMRI connectivity analysis using sparse dictionary learning and multiset canonical correlation analysis. In IEEE International Symposium on Biomedical Imaging (ISBI) . 683–686 (2015).

McKeown, M. J. et al. Spatially independent activity patterns in functional MRI data during the stroop color-naming task. Proc. Natl. Acad. Sci. USA 95 , 803–810. https://doi.org/10.1073/pnas.95.3.803 (1998).


Friman, O., Borga, M., Lundberg, P. & Knutsson, H. Exploratory fMRI analysis by autocorrelation maximization. Neuroimage 16 , 454–464. https://doi.org/10.1006/nimg.2002.1067 (2002).

Khalid, M. U., Shah, A. & Seghouane, A.-K. Adaptive 2DCCA based approach for improving spatial specificity of activation detection in functional MRI. In International Conference on Digital Image Computing Techniques and Applications (DICTA) . 1–6 (2012).

Khalid, M. U. & Seghouane, A.-K. Improving functional connectivity detection in fMRI by combining sparse dictionary learning and canonical correlation analysis. In IEEE International Symposium on Biomedical Imaging . 286–289 (2013).

Shah, A., Khalid, M. U. & Seghouane, A.-K. Comparing causality measures of fMRI data using PCA, CCA and vector autoregressive modelling. In International Conference of the IEEE Engineering in Medicine and Biology Society . 6184–6187 (2012).

Khalid, M. U., Shah, A. & Seghouane, A.-K. Sparse dictionary learning for fMRI analysis using autocorrelation maximization. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) . 4286–4289 (2015).

Lin, W., Wu, H., Liu, Y., Lv, D. & Yang, L. A CCA and ICA-based mixture model for identifying major depression disorder. IEEE Trans. Med. Imaging 36 , 745–756. https://doi.org/10.1109/TMI.2016.2631001 (2017).

Khalid, M. U. & Nauman, M. M. A novel subject-wise dictionary learning approach using multi-subject fMRI spatial and temporal components. Sci. Rep. 13 , 20201. https://doi.org/10.1038/s41598-023-47420-1 (2023).

Calhoun, V. D., Adali, T., Pearlson, G. D. & Pekar, J. J. A method for making group inferences from functional MRI data using independent component analysis. Hum. Brain Mapp. 14 , 140–151. https://doi.org/10.1002/hbm.1048 (2001).


Varoquaux, G., Gramfort, A., Pedregosa, F., Michel, V. & Thirion, B. Multi-subject dictionary learning to segment an atlas of brain spontaneous activity. Inf. Process Med. Imaging 22 , 562–573. https://doi.org/10.1007/978-3-642-22092-0_46 (2011).

Mensch, A., Varoquaux, G. & Thirion, B. Compressed online dictionary learning for fast resting-state fMRI decomposition. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI) . 1282–1285 (2016).

Iqbal, A. & Seghouane, A.-K. A dictionary learning algorithm for multi-subject fMRI analysis based on a hybrid concatenation scheme. Digital Signal Process. 83 , 249–260. https://doi.org/10.1016/j.dsp.2018.09.007 (2018).

Iqbal, A., Seghouane, A.-K. & Adali, T. Shared and subject-specific dictionary learning (ShSSDL) algorithm for multisubject fMRI data analysis. IEEE Trans. Biomed. Eng. 65 , 2519–2528. https://doi.org/10.1109/TBME.2018.2806958 (2018).

Khalid, M. U. Sparse group bases for multisubject fMRI data. IEEE Access 10 , 83379–83397. https://doi.org/10.1109/ACCESS.2022.3194651 (2022).

Han, Y. et al. Low-rank Tucker-2 model for multi-subject fMRI data decomposition with spatial sparsity constraint. IEEE Trans. Med. Imag. 41 , 667–679. https://doi.org/10.1109/TMI.2021.3122226 (2022).

Chen, P.-H. C. et al. A reduced-dimension fMRI shared response model. In Advances in Neural Information Processing Systems . Vol. 28 (Curran Associates, Inc., 2015).

Khalid, M. U., Khawaja, B. A. & Nauman, M. M. Efficient blind source separation method for fMRI using autoencoder and spatiotemporal sparsity constraints. IEEE Access 11 , 50364–50381. https://doi.org/10.1109/ACCESS.2023.3277543 (2023).

Hu, D. et al. Unified SPM-ICA for fMRI analysis. Neuroimage 25 , 746–755. https://doi.org/10.1016/j.neuroimage.2004.12.031 (2005).

Calhoun, V. D., Adali, T., Stevens, M. C., Kiehl, K. A. & Pekar, J. J. Semi-blind ICA of fMRI: A method for utilizing hypothesis-derived time courses in a spatial ICA analysis. Neuroimage 25 , 527–538. https://doi.org/10.1016/j.neuroimage.2004.12.012 (2005).

Wang, Z., Xia, M., Jin, Z., Yao, L. & Long, Z. Temporally and spatially constrained ICA of fMRI data analysis. PLoS One 9 , e94211. https://doi.org/10.1371/journal.pone.0094211 (2014).

Daubechies, I. et al. Independent component analysis for brain fMRI does not select for independence. Proc. Natl. Acad. Sci. USA 106 , 10415–10422. https://doi.org/10.1073/pnas.0903525106 (2009).

Lee, K., Tak, S. & Ye, J.C. A data-driven sparse GLM for fMRI analysis using sparse dictionary learning with MDL criterion. IEEE Trans. Med. Imaging 30 , 1076–1089 https://doi.org/10.1109/TMI.2010.2097275 (2011).

Seghouane, A.-K. & Khalid, M. U. Hierarchical sparse brain network estimation. In 2012 IEEE International Workshop on Machine Learning for Signal Processing . 1–6 (2012).

Seghouane, A.-K. & Khalid, M. U. Learning dictionaries from correlated data: Application to fMRI data analysis. In IEEE International Conference on Image Processing (ICIP) . 2340–2344 (2016).

Zhao, S. et al. Supervised dictionary learning for inferring concurrent brain networks. IEEE Trans. Med. Imaging 34 , 2036–2045. https://doi.org/10.1109/TMI.2015.2418734 (2015).

Long, Z., Liu, L., Gao, Z., Chen, M. & Yao, L. A semi-blind online dictionary learning approach for fMRI data. J. Neurosci. Methods 323 , 1–12. https://doi.org/10.1016/j.jneumeth.2019.03.014 (2019).

Morante, M., Kopsinis, Y., Theodoridis, S. & Protopapas, A. Information assisted dictionary learning for fMRI data analysis. IEEE Access 8 , 90052–90068. https://doi.org/10.1109/ACCESS.2020.2994276 (2020).

Liu, H. et al. ADCoC: Adaptive distribution modeling based collaborative clustering for disentangling disease heterogeneity from neuroimaging data. IEEE Trans. Emerg. Top. Comput. Intell. 7 , 308–318. https://doi.org/10.1109/TETCI.2021.3136587 (2023).

Lu, C., Shi, J. & Jia, J. Online robust dictionary learning. In 2013 IEEE Conference on Computer Vision and Pattern Recognition . 415–422 https://doi.org/10.1109/CVPR.2013.60 (2013).

Jiang, W., Nie, F. & Huang, H. Robust dictionary learning with capped l1-norm. In Proceedings of the 24th International Conference on Artificial Intelligence , IJCAI’15. 3590–3596 (AAAI Press, 2015).

Iqbal, A. & Seghouane, A.-K. Robust dictionary learning using \(\alpha \) -divergence. In ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . 2972–2976 https://doi.org/10.1109/ICASSP.2019.8683210 (2019).

Seghouane, A.-K. & Iqbal, A. Basis expansion approaches for regularized sequential dictionary learning algorithms with enforced sparsity for fMRI data analysis. IEEE Trans. Med. Imaging 36 , 1796–1807. https://doi.org/10.1109/TMI.2017.2699225 (2017).

Stone, M. An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion. J. R. Stat. Soc. Ser. B (Methodol.) 39 , 44–47 (1977).

von Neumann, J. Functional Operators Volume 2: The Geometry of Orthogonal Spaces. (Princeton University Press, 1951).

Lin, Z., Chen, M. & Ma, Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. J. Struct. Biol. 181 , 116–127 https://doi.org/10.1016/j.jsb.2012.10.010 (2013).

Progress in Optics (Elsevier, 2008).

Vanasse, T. J. et al. BrainMap VBM: An environment for structural meta-analysis. Hum. Brain Mapp. 39 , 3308–3325. https://doi.org/10.1002/hbm.24078 (2018).

Fan, J. & Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96 (1348–1360), 3085904 (2001).


Zou, H. The adaptive Lasso and its oracle properties. J. Am. Stat. Assoc. 101 , 1418–1429. https://doi.org/10.1198/016214506000000735 (2006).


Bolte, J., Sabach, S. & Teboulle, M. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Prog. 146 , 459–494. https://doi.org/10.1007/s10107-013-0701-9 (2014).

Candes, E. J., Li, X., Ma, Y. & Wright, J. Robust Principal Component Analysis? https://doi.org/10.48550/arXiv.0912.3599 (2009).

Smith, S. M. et al. Correspondence of the brain’s functional architecture during activation and rest. Proc. Natl. Acad. Sci. 106 , 13040–13045. https://doi.org/10.1073/pnas.0905267106 (2009).

Erhardt, E., Allen, E., Wei, Y., Eichele, T. & Calhoun, V. SimTB, a simulation toolbox for fMRI data under a model of spatiotemporal separability. NeuroImage 59 , 4160–7. https://doi.org/10.1016/j.neuroimage.2011.11.088 (2011).

Van Essen, D. C. et al. The human connectome project: A data acquisition perspective. Neuroimage 62 , 2222–2231 (2012).

Barch, D. M. et al. Function in the human connectome: Task-fMRI and individual differences in behavior. Neuroimage 80 , 169–189. https://doi.org/10.1016/j.neuroimage.2013.05.033 (2013).

Boukouvalas, Z., Levin-Schwartz, Y., Calhoun, V. D. & Adalı, T. Sparsity and independence: Balancing two objectives in optimization for source separation with application to fMRI analysis. J. Franklin Inst. 355 , 1873–1887. https://doi.org/10.1016/j.jfranklin.2017.07.003 (2018).

Seghouane, A.-K. & Iqbal, A. Consistent adaptive sequential dictionary learning. Signal Process. 153 , 300–310. https://doi.org/10.1016/j.sigpro.2018.07.018 (2018).

Khalid, M. U. Dictionary Learning Algorithms for Functional Magnetic Resonance Imaging . Ph.D. thesis, Australian National University (2015).


Acknowledgements

The work presented in the article is financially supported by Universiti Brunei Darussalam, Brunei Darussalam, through its University Research Grant scheme (grant number: UBD/RSCH/1.3/FICBF(b)/2022/019). The block design fMRI data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research, and by the McDonnell Center for Systems Neuroscience at Washington University.

Author information

Authors and affiliations.

College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, 11564, Riyadh, Saudi Arabia

Muhammad Usman Khalid & Sheeraz Akram

Faculty of Integrated Technologies, Universiti Brunei Darussalam, Bandar Seri Begawan, BE1410, Brunei

Malik Muhammad Nauman & Kamran Ali


Contributions

Idea, conception, methods, implementation: Muhammad Usman Khalid. Experiments: Sheeraz Akram. Supervision, management: Malik Muhammad Nauman. Funding, management: Kamran Ali. All authors reviewed the manuscript.

Corresponding author

Correspondence to Kamran Ali .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Khalid, M.U., Nauman, M.M., Akram, S. et al. Three layered sparse dictionary learning algorithm for enhancing the subject wise segregation of brain networks. Sci Rep 14 , 19070 (2024). https://doi.org/10.1038/s41598-024-69647-2


Received : 14 March 2024

Accepted : 07 August 2024

Published : 17 August 2024

DOI : https://doi.org/10.1038/s41598-024-69647-2




Ensure High-Quality Data Powers Your AI

  • Thomas C. Redman


The utility and impact of this technology depend on the data that goes into it.

AI does not need to fail on a global scale to cause enormous damage — to individuals, companies, and societies. Models frequently get things wrong, hallucinate, drift, and can collapse. Good AI comes from good data, but data quality is an enormous organization-wide issue (and opportunity), yet most companies have neglected it. Companies need to understand the nuances of the problem they’re trying to solve, get the data right (both by having the right data for that problem and by ensuring that the data is error-free), assign responsibility for data quality in the short term, and then push quality efforts upstream in the longer-term.

Twenty years ago, mortgage-backed securities and collateralized debt obligations were all the rage. These new financial products were, initially, a wonder: they helped put millions of people into homes and make billions for banks. Then things went horribly wrong, and they nearly tanked the global economy.


  • Thomas C. Redman , “the Data Doc,” is President of Data Quality Solutions . He helps companies and people  chart their courses to data-driven futures with special emphasis on quality, analytics, and organizational capabilities. His latest book, People and Data: Uniting to Transform Your Organization (Kogan Page) was published Summer 2023.


A lightweight semantic segmentation algorithm integrating CA and ECA-Net modules

  • Published: 19 August 2024
  • Volume 20 , pages 568–576, ( 2024 )

Cite this article


Zhihao Guo 1, Dongmei Ma 1 & Xiaoyun Luo 1

To address the loss of pixel features and the heavy parameter computation in existing semantic segmentation methods, which lead to unsatisfactory segmentation results and long runtimes, this paper proposes a lightweight semantic segmentation algorithm based on the fusion of multiple modules. The algorithm is based on the pyramid scene parsing network (PSPNet). Firstly, the MobileNetV2 network is chosen as the feature extraction network to construct a lightweight network structure. During training, a freeze-and-thaw strategy is used, and the Focal Loss function is added to balance the proportion of positive and negative samples. Next, spatial and channel reconstruction convolution (SCConv) is introduced into the pyramid pooling module to reduce the computational cost caused by redundant feature extraction in the segmentation task. Finally, the coordinate attention (CA) and efficient channel attention network (ECA-Net) modules are incorporated so that the multiple modules complement one another, enhancing salient features and improving segmentation accuracy. In ablation and comparison experiments, the average pixel accuracy on the PASCAL VOC 2012 dataset reaches 85.23%, the computation amount is reduced by 45.79%, and the training speed is improved by 68.69%. On the Cityscapes dataset, the average pixel accuracy reaches 86.75% and the mean intersection-over-union reaches 73.86%. The interaction of the correlated modules improves and optimizes the algorithm, effectively addressing low segmentation accuracy and slow training; the method thus has a clear accuracy advantage among lightweight models and can generally improve the efficiency of image semantic segmentation.



Author information

Authors and affiliations.

School of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, 730070, China

Zhihao Guo, Dongmei Ma & Xiaoyun Luo


Corresponding author

Correspondence to Dongmei Ma .

Ethics declarations

Conflicts of interest.

The authors declare no conflict of interest.

Additional information

This work has been supported by the National Natural Science Foundation of China (No. 61961037).

MA Dongmei is an associate professor in the School of Physics and Electronic Engineering at Northwest Normal University. She received her Ph.D. degree in Optical Engineering from Xi’an Institute of Optical Precision Machinery, Chinese Academy of Sciences in 2009. Her research interests are mainly in image processing and optoelectronic signaling.


About this article

Guo, Z., Ma, D. & Luo, X. A lightweight semantic segmentation algorithm integrating CA and ECA-Net modules. Optoelectron. Lett. 20 , 568–576 (2024). https://doi.org/10.1007/s11801-024-3241-z


Received : 04 November 2023

Revised : 08 April 2024

Published : 19 August 2024

Issue Date : September 2024

DOI : https://doi.org/10.1007/s11801-024-3241-z


IMAGES

  1. Solved Assignment Problems in C++ (with Algorithm and Flowchart

    algorithm for problem solving in c

  2. Problem Solving Through Programming in C

    algorithm for problem solving in c

  3. Examples of Algorithms and Flowcharts in C

    algorithm for problem solving in c

  4. Problem-solving algorithm

    algorithm for problem solving in c

  5. What Is Algorithm And Flowchart In C Programming

    algorithm for problem solving in c

  6. 01

    algorithm for problem solving in c

COMMENTS

  1. How to Use Algorithms to Solve Problems?

    End - End the execution. Let's take some examples of algorithms for computer science problems. Example 1. Swap two numbers with a third variable. Step 1: Start. Step 2: Take 2 numbers as input. Step 3: Declare another variable as "temp". Step 4: Store the first variable to "temp". Step 5: Store the second variable to the First variable.

  2. C Basic Algorithm: Exercises, Practice, Solution

    Go to the editor] 1. Write a C program to compute the sum of the two input values. If the two values are the same, then return triple their sum. Expected Output: 3 12. Click me to see the solution. 2. Write a C program that will take a number as input and find the absolute difference between the input number and 51.

  3. PDF Principles of Algorithmic Problem Solving

    algorithmic problem solving rose in popularity with the largest competitions attracting tens of thousands of programmers. While its mathematical coun-terpart has a rich literature, there are only a few books on algorithms with a strong problem solving focus. The purpose of this book is to contribute to the literature of algorithmic prob-

  4. Problem Solving Through Programming in C

    Table of Contents. Introduction to Problem Solving Through Programming in C. Problem Solving Through Programming in C. Steps to Solve a Problem With the Computer. Step 1: Understanding the Problem: Step 2: Analyzing the Problem: Step 3: Developing the solution: Step 4: Coding and Implementation: Problem Solving Steps.

  5. Algorithms Tutorial

    Algorithms Tutorial. Algorithm is a step-by-step procedure for solving a problem or accomplishing a task. In the context of data structures and algorithms, it is a set of well-defined instructions for performing a specific computational task. Algorithms are fundamental to computer science and play a very important role in designing efficient ...

  6. Algorithm in C Language

    What is Algorithm? An algorithm is a set of well-defined instructions to solve a particular problem. It is a step-by-step procedure to execute some instructions in a certain order to get the required output. Software Engineer commonly uses an algorithm for planning and solving problems. Following are the characteristics of an Algorithm:

  7. Algorithm in C Language

    This C code demonstrates an iterative algorithm for calculating the factorial of a non-negative integer. It initializes a result variable to 1 and uses a loop to multiply the result by each integer from 1 to n. The final result is then displayed. In summary, an algorithm is a set of clear, finite, and ordered steps that guide the solution of a ...

  8. 4. Problem Solving and Algorithms

    The development of an algorithm (a plan) is a key step in solving a problem. Once we have an algorithm, we can translate it into a computer program in some programming language. Our algorithm development process consists of five major steps. Step 1: Obtain a description of the problem. Step 2: Analyze the problem.

  9. PDF Chapter 3: Algorithmic Problem Solving

    An algorithm, whose characteristics will be discussed later, is a form that embeds the complete logic of the solution. Its formal written version is called a program, or code. Thus, algorithmic problem solving actually comes in two phases: derivation of an algorithm that solves the problem, and conversion of the algorithm into code.

  10. Learn Problem solving in C

    Problem solving in C. Learn problem solving in C from our online course and tutorial. You will learn basic math, conditionals and step by step logic building to solve problems easily. 4.5 (1413 reviews)

  11. PDF Problem Solving with Algorithms and Data Structures

    of the problem-solving process. Given a problem, a computer scientist's goal is to develop an algorithm, a step-by-step list of instructions for solving any instance of the problem that might arise. Algorithms are finite processes that if followed will solve the problem. Algorithms are solutions.

  12. What is Algorithm

    Definition of Algorithm. The word Algorithm means " A set of finite rules or instructions to be followed in calculations or other problem-solving operations ". Or. " A procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".

  13. What is an Algorithm?

    In computer programming terms, an algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of input (s) and produces the desired output. For example, An algorithm to add two numbers: Take two number inputs. Add numbers using the + operator. Display the result.

  14. C Algorithms

    This tutorial series show you how to implement the most common algorithms in C including sorting and searching. Section 1. Sorting. Selection Sort - learn about how the selection sort works. Heapsort - explain to you the heap sort algorithm. Quicksort - guide you on the quick sort algorithm. Section 2.

  15. Problem Solving Through Programming In C

    Learners enrolled: 29073. ABOUT THE COURSE : This course is aimed at enabling the students to. Formulate simple algorithms for arithmetic and logical problems. Translate the algorithms to programs (in C language) Test and execute the programs and correct syntax and logical errors. Implement conditional branching, iteration and recursion.

  16. Problem-Solving Through Programming In C

    An algorithm is the core structure of your C program. Let's solve a simple C programming problem together: find the area and the circumference of a circle. Try developing an algorithm for this program. Step 1: Start. Step 2: Read r. Step 3: Calculate A = 3.14*r*r and C = 2*3.14*r. Step 4: Print A, C. Step 5: Stop. The algorithm can then be converted into a program in C, as sketched below.
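
    The algorithm maps step for step onto a short C program. A minimal sketch, using 3.14 for pi exactly as the algorithm does, might look like this:

    #include <stdio.h>

    int main(void)
    {
        float r, A, C;

        printf("Enter the radius r: ");    /* Step 2: Read r       */
        scanf("%f", &r);

        A = 3.14f * r * r;                 /* Step 3: A = 3.14*r*r */
        C = 2.0f * 3.14f * r;              /*         C = 2*3.14*r */

        printf("Area = %f, Circumference = %f\n", A, C);   /* Step 4: Print A, C */

        return 0;                          /* Step 5: Stop         */
    }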

  17. Data Structures and Algorithms Problems

    Data Structures and Algorithms Problems. 1. Find a pair with the given sum in an array (Easy). 2. Check if a subarray with 0 sum exists or not (Medium). 3. Print all subarrays with 0 sum (Medium). 4. Sort a binary array in linear time (Easy).
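
    To show what the first problem in that list asks for, here is a straightforward quadratic-time C sketch of finding a pair with a given sum; it is only an illustration of the problem statement, not the reference solution from that collection:

    #include <stdio.h>

    /* Print one pair of elements whose sum equals target, if such a pair exists. */
    void find_pair(const int a[], int n, int target)
    {
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (a[i] + a[j] == target) {
                    printf("Pair found: (%d, %d)\n", a[i], a[j]);
                    return;
                }
        printf("No pair with sum %d\n", target);
    }

    int main(void)
    {
        int a[] = { 8, 7, 2, 5, 3, 1 };
        find_pair(a, 6, 10);               /* prints: Pair found: (8, 2) */
        return 0;
    }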

  18. Algorithm in C language

    Algorithm in C language. An algorithm is a sequence of instructions that are carried out in a predetermined order to solve a problem or complete a task; in other words, a set of instructions for resolving an issue or carrying out a certain activity. A function is a block of code that can be called and executed from other parts of the program.

  19. Solve Algorithms

    The true test of problem solving: when one realizes that time and memory aren't infinite.

  20. Problem Solving with Algorithms and Data Structures using C++

    Problem Solving with Algorithms and Data Structures using C++ by Bradley N. Miller, David L. Ranum, and Janice L. Pearce is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

  21. C programming Exercises, Practice, Solution

    C is a general-purpose, imperative computer programming language, supporting structured programming, lexical variable scope and recursion, while a static type system prevents many unintended operations. C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs.

  22. Top 20 Algorithms Problems for Coding Interviews with Solutions

    Binary search is a divide-and-conquer algorithm: the problem is divided into sub-problems, and those sub-problems are solved. It is a search algorithm, which means it is used to find the position of an element in a sorted collection; a minimal sketch is given below.
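
    A classic iterative binary search over a sorted array might look like this in C (a generic sketch, not a solution taken from the interview article):

    #include <stdio.h>

    /* Return the index of key in the sorted array a[0..n-1], or -1 if absent.
       Each step halves the search range: divide and conquer.                 */
    int binary_search(const int a[], int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  /* written this way to avoid overflow */
            if (a[mid] == key)
                return mid;
            else if (a[mid] < key)
                lo = mid + 1;              /* key can only be in the upper half */
            else
                hi = mid - 1;              /* key can only be in the lower half */
        }
        return -1;
    }

    int main(void)
    {
        int a[] = { 1, 3, 5, 7, 9, 11 };
        printf("%d\n", binary_search(a, 6, 7));    /* prints 3 */
        return 0;
    }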

  23. C Exercises

    Q2: Write a program to find the sum of two numbers entered by the user. In this problem, you have to write a program that adds two numbers and prints their sum on the console screen. For example: Input: Enter two numbers A and B: 5 2. Output: Sum of A and B is: 7. One possible solution is sketched below.
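
    One possible C solution matching that input/output format is sketched here; the prompt wording follows the example, everything else is an assumption of this illustration:

    #include <stdio.h>

    int main(void)
    {
        int a, b;

        printf("Enter two numbers A and B : ");
        scanf("%d %d", &a, &b);

        printf("Sum of A and B is: %d\n", a + b);
        return 0;
    }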
