To echo the remark by Faustman and Omenn (1996), recent improvements in health risk assessment (RA) have been made possible largely by advances in other disciplines such as analytical methods, toxicology, epidemiology, exposure assessment, molecular biology, and the modeling of adverse responses. Hazard identification and dose-response assessment are two of the four key steps in RA first formulated by the National Research Council (1983). These two steps are now subsumed under a single step, or subprocess, referred to by the National Research Council (1994) as toxicity assessment. This newer RA scheme makes more sense in that dose-response assessment always starts with hazard identification.

Many in vitro and short-term tests, like the widely used Salmonella mutation assay (Ames test), have proved useful in identifying chemical mutagens and hence possible chemical carcinogens. Modern analytical instruments demonstrate that a substance once reported at nondetectable levels can no longer simply be treated as nonexistent. Various mathematical models have been developed to extrapolate low-dose responses that cannot be measured directly without an impractically large number of test subjects. These research developments have now made health program agendas that require zero risk, or observations beyond actually measurable effects, more realistic. According to the National Research Council (1987), PB-PK modeling (Slides 16 and 17) now receives greater regulatory attention because of several relatively successful applications of this simulation technique in predicting tissue dose in humans and other species.
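The idea behind low-dose extrapolation can be illustrated with one of the simplest dose-response models, the one-hit model, P(d) = 1 - exp(-bd). The sketch below is only illustrative (the slope value and the observed high-dose response are hypothetical, not taken from the sources cited above); it shows how a slope fitted at an experimentally observable dose is used to estimate risk at an environmental dose far too low to measure in a bioassay, and why the model is approximately linear at low doses:

```python
import math

def one_hit_risk(dose, slope):
    """One-hit dose-response model: P(d) = 1 - exp(-b * d)."""
    return 1.0 - math.exp(-slope * dose)

# Hypothetical bioassay observation: 20% excess response at 10 mg/kg/day.
observed_dose, observed_risk = 10.0, 0.20

# Solve P(d) = 1 - exp(-b*d) for the slope b from that single observation.
slope = -math.log(1.0 - observed_risk) / observed_dose

# Extrapolate to a low environmental dose where measuring the response
# directly would require an impractically large number of test subjects.
low_dose = 0.001  # mg/kg/day
extrapolated = one_hit_risk(low_dose, slope)

# At low doses, 1 - exp(-b*d) is approximately b*d, i.e. linear in dose.
linear_approx = slope * low_dose
print(extrapolated, linear_approx)
```

Note the design point this sketch makes: because the exponential is nearly linear near zero, low-dose risk estimates from such models scale directly with the fitted slope, which is one reason model choice at high doses matters so much for regulatory low-dose estimates.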