Data Quality Assurance and Statistical Analysis of High Throughput Screenings for Drug Discovery
- Authors: Yang Zhong, Zuojun Guo, Jianwei Che
- Affiliation: Genomics Institute of the Novartis Research Foundation, 10675 John Jay Hopkins Drive, San Diego, California 92121, USA
- Source: Frontiers in Computational Chemistry: Volume 2, pp. 389-425
- Publication Date: February 2015
- Language: English
High throughput screening (HTS) is an important tool in modern drug discovery, and many recent successful drugs can be traced back to HTS [1]. The platform has proliferated from the pharmaceutical industry to national laboratories (e.g., the NIH Molecular Libraries Screening Centers Network) and to academic institutions. Beyond throughput improvements, from thousands of molecules in its early days to multimillion-compound collections today, HTS has been adapted to increasingly sophisticated biological assays such as high-content imaging. The vast amount of biological data generated by these screens poses a significant challenge for identifying interesting molecules in various biological processes. Because of the intrinsic noise of HTS and the complexity of the biological processes probed in most assays, screening results require careful analysis to identify reliable hit molecules. Various data normalization and analysis algorithms have been developed by different groups over the years. In this chapter, we briefly describe some common issues encountered in HTS and the related analyses.
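As an illustration of the kind of quality-assurance and normalization calculations the chapter concerns (this sketch is not taken from the chapter itself), below are two standard HTS metrics: the Z'-factor assay quality score of Zhang et al., and percent-inhibition normalization of raw well signals against plate controls. The function names and example values are our own, for illustration only.

```python
import statistics

def z_prime(pos_controls, neg_controls):
    """Z'-factor assay quality metric:
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values near 1 indicate wide separation between control bands;
    Z' > 0.5 is a common rule of thumb for a robust assay."""
    sd_p = statistics.pstdev(pos_controls)
    sd_n = statistics.pstdev(neg_controls)
    mu_p = statistics.mean(pos_controls)
    mu_n = statistics.mean(neg_controls)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

def percent_inhibition(signal, mu_pos, mu_neg):
    """Normalize a raw well signal to percent inhibition, where the
    negative-control mean maps to 0% and the positive-control mean to 100%."""
    return 100.0 * (mu_neg - signal) / (mu_neg - mu_pos)

# Hypothetical plate-control readouts (arbitrary units):
pos = [10.0, 10.0, 10.0]   # fully inhibited wells
neg = [100.0, 100.0, 100.0]  # uninhibited wells
print(z_prime(pos, neg))              # 1.0 for these noiseless controls
print(percent_inhibition(55.0, 10.0, 100.0))  # 50.0
```

In practice, robust variants (median and MAD in place of mean and standard deviation) are often preferred because outlier wells are common on screening plates.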