- Threads and concurrency in Linux
- 2. Overview of Multithreading
- 2.1 Thread creation
- 3. Recycling of threads
- 4. Code example
- 5. Screenshot of program running
- 6. Summary
- 1. Concurrency and competition
- 2. Shared resource protection
- 1. Atomic operation
- 1. Atomic integer operation API function
- 2. Atomic bit manipulation API function
- 2. Spin lock
- 1. Spin lock API function
- 2. Spin lock deadlock
- 3. Precautions for spin lock
- 3. Semaphore
- 1. Semaphore characteristics
- 2. Semaphore API function
- 4. Mutex
- 1. Features of Mutex
Threads and concurrency in Linux
Later it is initialized with an object in a loop (also in main):
>> arg = new DATA;
but for some reason the object is deleted from the thread functions:
>> delete a; // delete our own data
and, what is more, the deletion happens AFTER the mutex has been unlocked!
Doing this is very dangerous!
You can easily destroy another thread's object or read another thread's data (for some reason the reading is also performed outside the mutex lock).
Safe alternatives:
1) create an array of pointers
DATA *arg[SIZE_I][SIZE_J]
and hand each thread a pointer to its own individual piece of memory;
2) perform all operations on the shared memory only while the mutex is locked (that is, in the threads the reading, modification and freeing of the shared memory must all happen inside the same pthread_mutex_lock(&lock) section);
3) delete objects only in the thread that created them.
Using the article's code as an example is not recommended, because in it the author stepped on exactly the rake he was supposed to steer his readers away from.
Comment 3, Romzzzec, 12:48, 14/09/2010:
>> pthread_create(&thr[i+z], NULL, input_thr, (void *)arg);
As I understand it, threads should be created for inputting each element of the matrix, but
>> //Waiting for the completion of all threads at

Threads and concurrency in Linux

Threads are similar to processes. Like processes, threads are scheduled by the kernel in time slices. On a single-processor system, the kernel uses time slicing to simulate concurrent execution of threads, just as it does for processes. On a multi-processor system, threads, like multiple processes, can truly run in parallel.

Pre-knowledge:

2. Overview of Multithreading

So why is multithreading superior to multiple independent processes for most cooperative tasks? Because threads share the same memory space: different threads can access the same variables in memory, so all the threads in a program can read and write the declared global variables. If you have ever written non-trivial code with fork(), you will appreciate how important this is. Why? Although fork() allows multiple processes to be created, it also brings a communication problem: how do the processes communicate with each other when each of them has its own independent memory space? There is no simple answer to this question. Although there are many kinds of local IPC (inter-process communication), they all run into significant obstacles, starting with the fact that they impose some form of additional kernel overhead, which reduces performance.

2.1 Thread creation

Unlike a process created with fork(), a thread created with pthread_create() does not continue from the same point in the execution sequence as the main thread (that is, the thread that called pthread_create()); instead it starts by running the function start_routine(arg).

PS: In addition, remember to add the -lpthread option when compiling so that the pthread library is linked in, because pthread is not linked by default on Linux.

Parameters:
Return value:
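The parameter and return-value tables from the original are not reproduced above; the following minimal sketch shows typical pthread_create() usage (the names worker and thread_arg are illustrative, not from the article):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative start routine: receives the last argument of pthread_create(). */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;                       /* exit value, collectable via pthread_join() */
}

int main(void)
{
    pthread_t tid;
    int thread_arg = 1;

    /* Arguments: thread handle, attributes (NULL = defaults),
     * start routine, argument for the start routine.
     * Return value: 0 on success, an error number on failure. */
    int ret = pthread_create(&tid, NULL, worker, &thread_arg);
    if (ret != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", ret);
        exit(EXIT_FAILURE);
    }

    pthread_join(tid, NULL);           /* wait for the thread to finish */
    return 0;
}
```

Compile with the pthread library linked in, for example: gcc demo.c -o demo -lpthread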
3. Recycling of threads

Thread resources in a Linux system are limited, which means the number of threads a program can run simultaneously is limited. By default, when a thread ends, its resources are not released. Therefore, if a program repeatedly creates threads and lets them exit in the default way, the thread resources will eventually be exhausted and the process will no longer be able to create new threads. Recovery methods:
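The recovery methods themselves are not listed above; as a sketch of the two usual approaches (assuming a trivial task() routine purely for illustration), thread resources are reclaimed either by joining the thread or by detaching it:

```c
#include <pthread.h>
#include <stdio.h>

static void *task(void *arg)
{
    printf("task %s finished\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t joined, detached;

    /* Method 1: pthread_join() blocks until the thread ends and
     * releases its resources (optionally retrieving its exit value). */
    pthread_create(&joined, NULL, task, "joined");
    pthread_join(joined, NULL);

    /* Method 2: pthread_detach() marks the thread as detached, so its
     * resources are reclaimed automatically as soon as it exits.
     * A detached thread can no longer be joined. */
    pthread_create(&detached, NULL, task, "detached");
    pthread_detach(detached);

    pthread_exit(NULL);    /* end main without terminating the detached thread */
}
```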
4. Code example

PS: Pay attention to adding the -lpthread option when compiling so that the pthread library is linked in, because pthread is not linked by default on Linux. The client side is the same as in the previous article, no changes are required.
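The article's own code listing is not included here, so the following is only a guess at its general shape: a hypothetical thread-per-client TCP echo server (the port number and all identifiers are made up for illustration), not the author's program:

```c
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* One thread per connected client: echo back whatever the client sends. */
static void *client_thread(void *arg)
{
    int fd = *(int *)arg;
    free(arg);                          /* the pointer was malloc'ed by main() */
    pthread_detach(pthread_self());     /* recycle this thread automatically */

    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(fd, buf, (size_t)n);      /* send the data back unchanged */

    close(fd);
    return NULL;
}

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8888);        /* hypothetical port */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 8);

    for (;;) {
        int *cfd = malloc(sizeof(int)); /* each thread gets its own fd copy */
        *cfd = accept(srv, NULL, NULL);

        pthread_t tid;
        pthread_create(&tid, NULL, client_thread, cfd);
    }
}
```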
5. Screenshot of program running

Use XShell to connect to the virtual machine and simulate the client side. Multiple clients:

6. Summary

The article starts from the concept of a thread, then creates a child thread, and then recycles the child thread. Multithreading is relatively simple compared with multiprocessing, and recycling is also relatively simple.

1. Concurrency and competition

Linux is a multitasking operating system, so multiple tasks may access the same memory area at the same time and compete for it. These tasks may overwrite each other's data in that memory and cause data corruption; this problem must be dealt with, and in severe cases the system may even crash. The causes of concurrency in a modern Linux system are quite complicated; to sum up, the main ones are:

①. Multi-thread concurrent access.
②. Preemptive concurrent access. Starting from the 2.6 kernel, the Linux kernel supports preemption, which means the scheduler can preempt the running thread at any time in order to run another thread.
③. Concurrent access from interrupt handlers. Anyone who has worked with an STM32 knows how much power a hardware interrupt has.
④. SMP (multi-core) concurrent access between cores. Multi-core SOCs of the ARM architecture are very common now, and a multi-core CPU has concurrent access between its cores.

2. Shared resource protection

In order to protect shared resources, the Linux kernel provides several mechanisms for handling concurrency and competition.

1. Atomic operation

Atomic operations are operations that cannot be divided any further. Atomic operations are generally used for operations on variables or on bits.

1. Atomic integer operation API function

The Linux kernel defines a structure called atomic_t to perform atomic operations on integer data; in use, atomic variables replace plain integer variables. This structure is defined in the include/linux/types.h file. Once there are atomic variables, the next step is to operate on them: read, write, increase, decrease, and so on. The Linux kernel provides a large number of atomic operation API functions, as shown in the following table:

2. Atomic bit manipulation API function

Bit operations are also very common. The Linux kernel provides a series of atomic bit operation API functions as well, but atomic bit operations do not have a data structure like atomic_t for integer variables; atomic bit operations operate directly on memory.
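The API tables themselves are not reproduced above. As a minimal kernel-side sketch (the device-busy pattern and all identifiers here are illustrative, not from the article), atomic integer and atomic bit operations are typically used like this:

```c
#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/errno.h>

/* Atomic integer: replaces a plain int so updates cannot be torn by races. */
static atomic_t open_count = ATOMIC_INIT(1);

/* Atomic bit operations need no atomic_t; they act directly on memory. */
static unsigned long status_flags;

static int demo_open(void)
{
    /* Atomically decrement and test: only one caller sees the count hit 0. */
    if (!atomic_dec_and_test(&open_count)) {
        atomic_inc(&open_count);        /* roll back, resource already taken */
        return -EBUSY;
    }

    set_bit(0, &status_flags);          /* atomically mark bit 0 as "in use" */
    return 0;
}

static void demo_release(void)
{
    clear_bit(0, &status_flags);        /* atomically clear the "in use" bit */
    atomic_inc(&open_count);            /* allow the next caller in */
}
```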
2. Spin lock

Atomic operations can only protect integer variables or bits, but in real use critical regions are rarely as simple as an integer variable or a bit. Atomic operations cannot handle complex structured data, and that is where the locking mechanisms of the Linux kernel come in. When a thread wants to access a shared resource, it must first acquire the corresponding lock. A lock can be held by only one thread at a time; as long as that thread does not release the lock, other threads cannot acquire it. With a spin lock, if the lock is held by thread A and thread B wants to acquire it, thread B ends up in a busy-wait ("spinning") state: thread B does not go to sleep or do anything else, it just keeps looping, waiting for the lock to become available. The "spin" in "spin lock" means exactly this spinning in place while waiting for the lock so that the shared resource can be accessed. From this we can already see one disadvantage of the spin lock: a thread waiting for a spin lock stays in the spinning state, which wastes processor time and reduces system performance, so a spin lock must not be held for long. Spin locks are therefore suitable for short-term, lightweight locking; if you run into a scenario that requires holding a lock for a long time, another mechanism should be used. The Linux kernel uses the structure spinlock_t to represent a spin lock; the structure is defined as follows:

1. Spin lock API function

2. Spin lock deadlock

The API functions in the table above are meant for concurrent access between threads. What should be done if an interrupt also comes into play and the interrupt also wants to access the shared resource? The first thing to be sure of is that spin locks can be used in interrupts, but when using a spin lock in an interrupt you must disable the local interrupt (that is, interrupts on the current CPU; a multi-core SOC has several CPU cores) before acquiring the lock, otherwise a deadlock can occur: thread A runs first and acquires the lock. While thread A is running function A, an interrupt occurs and takes over the CPU. The interrupt service routine also tries to acquire the same lock, but the lock is held by thread A, so the interrupt keeps spinning, waiting for the lock to become free. But until the interrupt service routine finishes, thread A cannot run. Thread A says "you let go first", the interrupt says "no, you let go first"; the situation is a stalemate and a deadlock occurs! The best solution is to disable the local interrupt before acquiring the lock, and the Linux kernel provides the corresponding API functions, as shown in the following table. In general, use spin_lock_irqsave/spin_unlock_irqrestore in threads and spin_lock/spin_unlock in interrupts.
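The spin lock API tables are not reproduced above. A minimal sketch of the guideline just described (the handler and counter names are illustrative): spin_lock_irqsave()/spin_unlock_irqrestore() in thread context, plain spin_lock()/spin_unlock() in the interrupt handler:

```c
#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(demo_lock);   /* statically defined spin lock */
static int shared_counter;           /* shared resource the lock protects */

/* Thread (process) context: disable local interrupts while holding the lock,
 * so an interrupt on this CPU cannot start spinning on a lock we already own. */
static void thread_side(void)
{
    unsigned long flags;

    spin_lock_irqsave(&demo_lock, flags);
    shared_counter++;                         /* keep the critical section short */
    spin_unlock_irqrestore(&demo_lock, flags);
}

/* Interrupt context: interrupts on this CPU are already off,
 * so plain spin_lock()/spin_unlock() is sufficient. */
static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
    spin_lock(&demo_lock);
    shared_counter--;
    spin_unlock(&demo_lock);

    return IRQ_HANDLED;
}
```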
3. Precautions for spin lock

①. Because the thread waiting for a spin lock keeps spinning, the lock must be held only briefly: keep the critical section protected by the spin lock as short as possible.
②. No API function that may cause the thread to sleep may be called in the critical section protected by the spin lock, otherwise a deadlock may occur.
③. A spin lock cannot be acquired recursively: if you recursively request a lock you already hold, you have to "spin" and wait for the lock to be released, but while you are spinning there is no way for you to release it. The result is that you deadlock yourself!
④. Portability of the driver must be considered: no matter whether the SOC is single-core or multi-core, always write the driver as if it were running on a multi-core SOC.
3. Semaphore

1. Semaphore characteristics

①. A semaphore puts the thread waiting for the resource to sleep instead of spinning, so it is suitable for situations where a shared resource is held for a relatively long time.
②. For the same reason, a semaphore cannot be used in an interrupt, because a semaphore can cause sleep and an interrupt must not sleep.
③. If the shared resource is held only for a short time, a semaphore is not a good choice, because the overhead of the frequent sleeping and thread switching far outweighs its advantages.

2. Semaphore API function

The Linux kernel uses the semaphore structure to represent a semaphore; the content of the structure is as follows:
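The semaphore structure and its API table are not reproduced above. A minimal usage sketch (identifiers are illustrative): initialize with sema_init(), take the semaphore with down_interruptible(), release it with up():

```c
#include <linux/semaphore.h>
#include <linux/errno.h>

static struct semaphore demo_sem;

static void demo_init(void)
{
    sema_init(&demo_sem, 1);   /* initial count 1: mutual-exclusion style use */
}

static int demo_use(void)
{
    /* down_interruptible() puts the caller to sleep until the semaphore is
     * available; it returns non-zero if the sleep was interrupted by a signal. */
    if (down_interruptible(&demo_sem))
        return -ERESTARTSYS;

    /* ... access the shared resource (may be held for a relatively long time) ... */

    up(&demo_sem);             /* release the semaphore and wake a waiter */
    return 0;
}
```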
4. Mutex

1. Features of Mutex

Setting the value of a semaphore to 1 lets the semaphore be used for mutually exclusive access. Although mutual exclusion can be achieved with a semaphore, Linux provides a mechanism that is more specialized for mutual exclusion than the semaphore: the mutex. Mutually exclusive access means that only one thread can access the shared resource at a time, and a mutex cannot be requested recursively. When writing Linux drivers, it is recommended to use a mutex wherever exclusive access is required. Before using a mutex, define a mutex variable. Pay attention to the following points when using a mutex:

①. A mutex can cause the caller to sleep, so it cannot be used in an interrupt.
②. Like the semaphore, the critical section protected by a mutex may call API functions that can block.
③. Because only one thread can hold a mutex at a time, the mutex must be released by its holder, and a mutex cannot be locked or unlocked recursively.
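As a closing sketch of the points above (names are illustrative), a mutex is typically defined statically with DEFINE_MUTEX() and locked/unlocked around the critical section:

```c
#include <linux/mutex.h>
#include <linux/errno.h>

static DEFINE_MUTEX(demo_mutex);        /* statically defined and initialized */

static int demo_exclusive_access(void)
{
    /* mutex_lock_interruptible() may sleep, so it must never be called from
     * interrupt context; it returns non-zero if interrupted by a signal. */
    if (mutex_lock_interruptible(&demo_mutex))
        return -ERESTARTSYS;

    /* ... exclusive access to the shared resource ... */

    mutex_unlock(&demo_mutex);          /* must be released by the same holder */
    return 0;
}
```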