Physicists often have very precise mathematical theories to describe physical systems, but they are limited in their ability to describe the physical world because so few of these theories are amenable to analytic solution. Invariably, physicists must resort to approximate solutions obtained through numerical calculation in order to produce useful results that both broaden and deepen our understanding of physics. This has spawned the field of Computational Physics: the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists. These approaches are often very demanding in terms of the processing power and/or memory requirements of the computers used, which limits the extent to which these problems can be solved. Computational Physics therefore also encompasses the development of software and the exploitation of computer hardware features to solve these problems. Thanks to computational physics, capabilities such as weather forecasting are now possible that have a direct impact on people's daily lives.
As our name states, at Computational Physics, Inc. (CPI) we delve into the computational aspects of physical problems, investigating techniques for the numerical solution of mathematical equations arising in all areas of physics. Although computer performance has increased dramatically over the last few decades, and continues to do so, physicists are still limited in their ability to solve problems by the algorithms available and by computer processing power. At CPI we investigate faster ways to perform physics calculations, either by developing more intelligent algorithms or by exploiting cutting-edge hardware technology to accelerate the computations. One such capability that CPI has investigated is the use of Graphics Processing Units (GPUs) to accelerate radiative transfer calculations.
The advent of GPUs, with their tremendous computational speed and very high memory bandwidth, has opened the possibility of increasing computational power for the assimilation of satellite data in numerical weather prediction in a more cost-effective way. Their combination of general-purpose supercomputing, high parallelism, high memory bandwidth, low cost, and compact size now makes GPU-based desktop computing an appealing alternative to massively parallel systems built from commodity central processing units (CPUs), such as Beowulf clusters. However, exploiting this architecture for data assimilation requires executing fast radiative transfer (RT) models on GPUs, which is significantly more complicated than simply porting code written for CPU-based architectures. There is therefore a need to develop and demonstrate the capability to execute fast RT calculations much more rapidly on GPU architectures, allowing affordable operational processing of high-volume hyperspectral data for data assimilation and the next generation of atmospheric soundings.
Under an internal research and development (IR&D) effort, a scene material classification tool called the Hyperspectral GPU Processor (HypGP) has been developed to assist CPI scientists in obtaining terrain and ocean scene material types for a number of CPI scene generation codes (e.g., ISIS, GAIA, OCEANUS). The HypGP tool performs atmospheric compensation using global climatology databases generated by the MOSART code, estimating atmospheric transmittance, solar/lunar/thermal path radiance, direct solar/lunar terrain irradiance, and diffuse solar/lunar/thermal irradiances. A similar technique was developed for Landsat data and compares favorably with ground-truth data. Using a spectral angle mapper routine and the MOSART Terrain Material Library of over 180 materials, the terrain viewed by a hyperspectral instrument can be readily classified.
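The spectral angle mapper underlying this classification treats each pixel's spectrum and each library material's spectrum as vectors in band space, assigning the pixel to the material whose reference spectrum subtends the smallest angle with it. The following NumPy sketch illustrates the technique; the function name and array shapes are illustrative, not HypGP's actual interface:

```python
import numpy as np

def spectral_angle_map(cube, library):
    """Classify each pixel of a hyperspectral cube by minimum spectral angle.

    cube:    (n_pixels, n_bands) pixel spectra (after atmospheric compensation)
    library: (n_materials, n_bands) reference material spectra
    Returns the index of the best-matching library material for each pixel.
    """
    # Normalize spectra so the dot product yields the cosine of the spectral angle.
    cube_n = cube / np.linalg.norm(cube, axis=1, keepdims=True)
    lib_n = library / np.linalg.norm(library, axis=1, keepdims=True)
    # cos(theta) for every pixel/material pair; clip guards against roundoff.
    cos_theta = np.clip(cube_n @ lib_n.T, -1.0, 1.0)
    angles = np.arccos(cos_theta)        # shape (n_pixels, n_materials)
    return np.argmin(angles, axis=1)     # smallest angle = best match
```

Because the angle depends only on spectral shape, not overall magnitude, the classification is relatively insensitive to illumination differences across the scene, and each pixel's result is independent of every other pixel's.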
As an example, an AVIRIS scene of Moffett Field, California, consisting of 753 samples and 1924 lines (for a total of almost 1.45 million hyperspectral pixels) with 224 spectral channels, has been fully processed by HypGP using a subset of 81 terrain materials from the MOSART library (i.e., snow, ice, clouds, fabrics, and painted metal were excluded). The HypGP code exists in two forms: a straightforward Fortran 95 implementation that processes the AVIRIS hypercube on a CPU, and a CUDA Fortran implementation that processes the hypercube on a 240-core GPU. Considering only the time required to process the hyperspectral data (i.e., ignoring the time required to ingest the hypercube and to output the terrain classification), the CUDA Fortran/GPU version of HypGP runs approximately 30 times faster than the Fortran 95/CPU version on the same machine. For this use case, the summary information compared exactly, and the CPU-generated and GPU-generated scenes are both visually and numerically identical.
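The roughly 30x figure quoted above is measured over the classification step alone, with hypercube ingest and output excluded. A minimal sketch of that timing methodology follows; the helper name and interface are hypothetical, not part of HypGP:

```python
import time

def time_classification(classify, cube, library, repeats=3):
    """Time only the classification call, excluding data ingest and output.

    classify: a function taking (cube, library), e.g. a CPU or GPU version
    Returns the best-of-N wall-clock time in seconds, which suppresses
    one-off effects such as cold caches when comparing implementations.
    """
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        classify(cube, library)  # only this call is timed
        best = min(best, time.perf_counter() - t0)
    return best
```

Timing each implementation this way and taking the ratio of the two results gives a speedup that reflects the compute kernel itself rather than I/O, which is the fair basis for a CPU-versus-GPU comparison.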