Breaking Up is Hard to Do
When writing concurrent code, we use locks to protect data. But how do we decide which data is to be protected by which lock, or, for that matter, how many locks we should provide? After all, too few locks limit performance due to lock contention, while too many locks limit performance due to lock-acquisition overhead and, worse yet, invite deadlock.
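To make that trade-off concrete, here is a minimal sketch (not taken from the talk) of one common partitioning: a hash table protected by one POSIX-threads mutex per bucket rather than by a single global lock. The bucket count, structure names, and functions below are invented purely for illustration.

#include <pthread.h>
#include <stdlib.h>

#define NBUCKETS 64	/* granularity knob: more buckets, less contention */

struct node {
	struct node *next;
	unsigned long key;
};

struct bucket {
	pthread_mutex_t lock;	/* protects ->head and the list it anchors */
	struct node *head;
};

static struct bucket table[NBUCKETS];

/* One-time setup: initialize each per-bucket lock. */
void hash_init(void)
{
	for (int i = 0; i < NBUCKETS; i++)
		pthread_mutex_init(&table[i].lock, NULL);
}

/* Insert a key while holding only the owning bucket's lock, so that
 * insertions into different buckets can proceed in parallel. */
int hash_insert(unsigned long key)
{
	struct bucket *b = &table[key % NBUCKETS];
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	n->key = key;
	pthread_mutex_lock(&b->lock);
	n->next = b->head;
	b->head = n;
	pthread_mutex_unlock(&b->lock);
	return 0;
}

With NBUCKETS set to 1 this degenerates to a single global lock and maximum contention; set it very high and per-operation locking overhead and memory grow, and any operation spanning several buckets (a resize, say) must acquire its locks in a fixed order to avoid deadlock. That is exactly the dilemma the abstract describes.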
This locking dilemma is a special case of a larger problem. Concurrency requires that we break up the required processing into separate tasks, and that these separate tasks be able to communicate as needed. To paraphrase Douglas Schmidt, how do we partition the “big ball of mud” that all too aptly describes many sequential programs?
This talk will give an overview of this process, with the goal of making breaking up easier to do. Breaking up of software, anyway: other types of break-ups are left to wiser heads.

Paul McKenney
Meta
Paul E. McKenney has been coding for more than five decades, more than half of that on parallel hardware. Paul is a software engineer at Meta Platforms, where he maintains the RCU implementation within the Linux kernel, whose wide variety of workloads presents highly entertaining performance, scalability, real-time response, and energy-efficiency challenges. Prior to that, he did very similar work for IBM’s Linux Technology Center, before which he worked on the DYNIX/ptx kernel at Sequent, and prior to that on packet-radio and Internet protocols (but long before it was polite to mention the Internet at cocktail parties), system administration, business applications, and real-time systems. His hobbies include what passes for running at his age (“hiking”) along with the usual house-wife-and-grown-kids habit.