In March, Microsoft's External Research team put out a request for proposals (RFP) for three-year research projects in multicore computing. On July 28, the opening day of its annual Research Faculty Summit, Microsoft announced how and where it will be spending its grant money.
Seven academic research projects will share the $1.5 million Microsoft allotted for the Secure and Scalable Multicore Computing RFP. According to Microsoft, the RFP is designed to “stimulate and enable bold, substantial research in multicore software that rethinks the relationships among computer architecture, operating systems, runtimes, compilers and applications.”
Microsoft, like many tech leaders, is investing substantial time and money of its own to try to help ease the transition to multicore/manycore computing with various parallel-processing advances. At this week's Research Faculty Summit, Microsoft's Parallel Computing Platform team is set to present on some of this work, including the Parallel Extensions to the .Net Framework and Parallel Language Integrated Query (PLINQ). Representatives from the Microsoft-Intel Universal Parallel Computing Research Centers also are set to present their research agendas at the conference.
Where is Microsoft investing outside the Redmond walls on the multicore front? Here are the projects being funded under the aforementioned multicore RFP:
Sensible Transactional Memory via Dynamic Public or Private Memory, Dan Grossman, University of Washington: “Integrating transactions into the design and implementation of modern programming languages is surprisingly difficult. The broad goal of this study is to remove such difficulties via work in language semantics, compilers, runtime systems and performance evaluation.”
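For a rough sense of what language-level transactions look like, here is a minimal sketch using Haskell's existing software transactional memory library (Control.Concurrent.STM). The bank-transfer scenario and names are illustrative assumptions only and don't represent the Washington project's own design.

```haskell
import Control.Concurrent.STM

-- A toy account held in a transactional variable.
type Account = TVar Int

-- Move money between accounts. The whole block commits atomically:
-- no other thread can observe the state where the funds have left
-- 'from' but not yet arrived at 'to'.
transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  readTVarIO a >>= print  -- 60
  readTVarIO b >>= print  -- 40
```

The appeal for multicore software is that the atomically block gives the programmer a single, composable unit of isolation without explicit locks.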
Supporting Scalable Multicore Systems Through Runtime Adaptation, Kim Hazelwood, University of Virginia: “The Paradox Compiler Project aims to develop the means to build scalable software that executes efficiently on multicore and manycore systems via a unique combination of static analyses and compiler-inserted hints and speculation, combined with dynamic, runtime adaptation. This research will focus on the Runtime Adaptation portion of the Paradox system.”
Language and Runtime Support for Secure and Scalable Programs, Antony Hosking, Jan Vitek, Suresh Jagannathan and Ananth Grama, Purdue University: “Expressing and managing concurrency at each layer of the software stack, with support across layers, as necessary, to reduce programmer effort in developing secure applications while ensuring scalable performance is a critical challenge. This team will develop novel constructs that fundamentally enhance the performance and programmability of applications using transaction-based approaches.”
Geospatial-based Resource Modeling and Management in Multi- and Manycore Era, Tao Li, University of Florida: “To ensure that multicore performance will scale with the increasing number of cores, innovative processor architectures (e.g., distributed shared caches, on-chip networks) are increasingly being deployed in the hardware design. This team will explore novel techniques for geospatial-based on-chip resource utilization analysis, management and optimization.”
Reliable and Efficient Concurrent Object-Oriented Programs (RECOOP), Bertrand Meyer, ETH Zurich, Switzerland: “The goal of this project, starting with the simple concurrent object-oriented programming (SCOOP) model of concurrent computation, is to develop a practical formal semantics and proof mechanism, enabling programmers to reason abstractly about concurrent programs and allowing proofs of formal properties of these programs.”
Runtime Packaging of Fine-Grained Parallelism and Locality, David Penry, Brigham Young University: “Scalable multicore environments will require the exploitation of fine-grained parallelism to achieve superior performance…. Current packaging algorithms suffer from a number of limitations. These researchers will develop new packaging algorithms that can take into account both parallelism and locality, are aware of critical sections, can be rerun as the runtime environment changes, can incorporate runtime feedback, and are highly scalable.”
Multicore-Optimal Divide-and-Conquer Programming, Paul Hudak, Yale University: “Divide and conquer is a natural, expressive and efficient model for specifying parallel algorithms. This team casts divide and conquer as an algebraic functional form, called DC, much like the more popular map, reduce and scan functional forms. As such, DC subsumes the more popular forms, and its modularity permits application to a variety of problems and architectural details.”
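To make the idea of an algebraic divide-and-conquer form concrete, here is a hypothetical sketch in Haskell. The combinator name dc, its signature and the mergesort instance are assumptions for illustration only, not the DC form the Yale team has defined.

```haskell
-- A hypothetical divide-and-conquer combinator, parameterised by a
-- test for trivial inputs, a direct solver, a splitter and a combiner.
dc :: (a -> Bool)  -- is the problem small enough to solve directly?
   -> (a -> b)     -- solve a trivial problem
   -> (a -> [a])   -- divide into subproblems
   -> ([b] -> b)   -- combine subresults
   -> a -> b
dc trivial solve divide combine = go
  where
    go x
      | trivial x = solve x
      | otherwise = combine (map go (divide x))

-- Mergesort expressed as an instance of the combinator.
mergesort :: Ord a => [a] -> [a]
mergesort = dc (\xs -> length xs <= 1) id split (\[l, r] -> merge l r)
  where
    split xs = let (l, r) = splitAt (length xs `div` 2) xs in [l, r]
    merge xs [] = xs
    merge [] ys = ys
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys

main :: IO ()
main = print (mergesort [5, 3, 8, 1, 4, 2])  -- prints [1,2,3,4,5,8]
```

Because the recursive subcalls are independent, the map over subproblems is exactly where a parallel implementation could farm work out to separate cores.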