Combinatorial Chemistry & High Throughput Screening - Volume 9, Issue 2, 2006
Editorial [Efficient Medicinal Chemistry (Guest Editor: Donald J. Kyle)]
Successful pharmaceutical companies are able to achieve a favorable financial balance between the upward pressure caused by increasing research costs and the downward pressure on the prices of commercial pharmaceuticals. These companies strive to conduct their research activities with an underlying strategic intent of constantly improving research efficiency as a main driver of attaining this critical balance. Lack of efficiency is likely to lead to a loss of competitive advantage, extended time to commercialization, and overall higher costs associated with the discovery and development of a new drug. Significant technological advances capable of impacting research efficiency have been made during the past decade, but accessing these technologies can require significant up-front and ongoing financial, human resource, and infrastructure commitments. Achieving efficiency requires the careful and strategic implementation of these technologies, with the appreciation that acquiring technologies alone will not guarantee success. The creative thinking that occurs in the laboratories of medicinal chemists is at least as important as the productivity-enhancing technologies available there. Consider the extreme example of knowing in advance exactly which molecule to synthesize to achieve the ideal balance of pharmacological potency, PKDM (pharmacokinetics and drug metabolism), safety, and bioavailability. The medicinal chemist would need to prepare only a single molecule; it follows that only a single analytical profile and a single assay would be required. The operational overhead associated with high-throughput synthesis, analysis, and screening would be unnecessary, and the highest research efficiency would be attained. Of course this is not possible because of the trial-and-error nature of the drug discovery process, but the concept supports the argument that the goal of medicinal chemistry should be to synthesize as few molecules as possible in order to identify the highest quality, lowest risk candidates for development. Working toward such a goal would represent movement toward higher efficiency within discovery research. Chemistry as a discipline is somewhat unique in that the relevant scientific literature is vast, spanning more than a century. In addition, chemistry is the one discipline within a multidisciplinary research team that becomes engaged at the earliest stages of an exploratory project and remains engaged throughout the commercial lifetime of the ultimate product. Access to prior, relevant synthesis experience, simple yet effective data visualization, knowledge-based molecule design, and the ability to make reliable predictions of properties from chemical structure alone are all important ways of helping a medicinal chemist move toward more efficient experimentation. Although these types of tools are in their infancy, it is clear that if they can be made into systems that chemists will actually use and trust, then, combined with laboratory productivity-enhancing technologies, they should yield substantial and ongoing improvements in the efficiency of the research process. This special issue of CCHTS is dedicated to the presentation of various tools, systems, and processes aimed at assisting the medicinal chemist in the quest to work more efficiently, i.e. to synthesize as few compounds as possible to find the most "ideal" compound in a series.
The Integration of Process R&D in Drug Discovery - Challenges and Opportunities
In today's climate, where much attention is focused on the state of the pharmaceutical industry, especially its productivity, pricing policies, timelines, and competition, there is an increased need for a critical revision of working practices in the business. The prevailing prioritization of time-to-market is increasingly shifting to also put quality, risk management, and effectiveness/efficiency in the limelight. Resources in terms of people and money will continue to be constrained and, therefore, best collaborative principles have to be adopted between different parts of the organization; only by operating this way can output be maximized. One of the most important key performance indicators in pharma R&D is the number of newly appointed candidate drugs (CDs). However, it is not only a matter of counting numbers but, more importantly, of nominating compounds with the best properties and the best likelihood of survival. In that vein, the demands on Process R&D have risen considerably over recent years, and there is now a pronounced need to forecast cost of goods for the API (active pharmaceutical ingredient) and to address scalability issues, IP matters, route design, etc. On top of this, there is, as always, an expectation that the supply of material needed to conduct the various studies will be timely, fully reliable, and flexible, even if volumes and delivery dates fluctuate widely. To cope successfully with this challenging and sometimes stressful situation, back-integration into the earlier parts of Drug Discovery is a must; hence, engagement with new projects will have to be initiated as early as the lead optimization (LO) stage. The consequences of this and its further implications constitute the core of the paper.
Achieving Maximum ROI from Corporate Databases: Exploiting Your Databases with Integrated Querying for Better Decision-Making
Authors: L. F. Jardine, A. O. Krassavine, A. W. R. Payne and S. Porter
In order to increase the rate of drug discovery, pharmaceutical and biotechnology companies spend billions of dollars a year assembling research databases. Current trends still indicate a falling rate in the discovery of New Molecular Entities (NMEs). It is widely accepted that these data need to be integrated for them to add value, but the degree to which this must be achieved is often misunderstood. The true goal of data integration must be to provide accessible knowledge: if knowledge cannot be gained from the data, the business case for gathering them is invalidated. Current data integration solutions focus on the initial task of integrating the actual data and, to some extent, also address the need to allow users to access the integrated information. Typically, the search tools provided are either restrictive forms or free-text based. While useful, neither of these solutions is suitable for providing full coverage of large numbers of integrated, structured data sources. One solution to this accessibility problem is to present the integrated data in a collated manner that allows users to browse and explore it, and to perform complex ad-hoc searches on it within a scientific context, without the need for advanced Information Technology (IT) skills. Additionally, the solution should be maintainable by in-house administrators rather than requiring expensive consultancy. This paper examines the background to this problem, investigates the requirements for effective exploitation of corporate data, and presents a novel, effective solution.
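As a rough illustration only (not the authors' system), the sketch below shows the kind of ad-hoc, cross-source query that such an integrated solution aims to put within a scientist's reach; the schema, table names, and data are invented for the example.

```python
# Hypothetical sketch of an ad-hoc query spanning chemistry and assay data.
# The schema (compounds, assay_results) is an illustrative assumption,
# not the paper's actual data model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE compounds (id INTEGER PRIMARY KEY, smiles TEXT, mol_weight REAL);
CREATE TABLE assay_results (compound_id INTEGER, target TEXT, ic50_nm REAL);
INSERT INTO compounds VALUES (1, 'c1ccccc1O', 94.1), (2, 'CCO', 46.1);
INSERT INTO assay_results VALUES (1, 'PDE4', 120.0), (2, 'PDE4', 8500.0);
""")

# One integrated query: potent actives against a target, below a
# molecular-weight cutoff, ranked by potency.
rows = conn.execute("""
    SELECT c.id, c.smiles, r.ic50_nm
    FROM compounds c JOIN assay_results r ON r.compound_id = c.id
    WHERE r.target = 'PDE4' AND r.ic50_nm < 1000 AND c.mol_weight < 500
    ORDER BY r.ic50_nm
""").fetchall()
print(rows)
```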
Application and Utilization of Chemoinformatics Tools in Lead Generation and Optimization
Authors: N. Fotouhi, P. Gillespie, R. A. Goodnow, S.-S. So, Y. Han and L. E. Babiss
The process of drug discovery is a complex and high-risk endeavor that requires focused attention on experimental hypotheses and the application of diverse sets of technologies and data to facilitate high-quality decision-making, all aimed at enhancing the quality of the chemical development candidate(s) through clinical evaluation and into the market. In support of the lead generation and optimization phases of this endeavor, high throughput technologies such as combinatorial/high throughput synthesis and high throughput and ultra-high throughput screening have allowed the rapid generation and analysis of large numbers of compounds and data. Today, for every analog synthesized, 100 or more data points can be collected and captured in various centralized databases. The analysis of thousands of compounds can very quickly become a daunting task. In this article we present the process we have developed for analyzing and prioritizing large sets of data, starting from diversity-based and focused uHTS in support of lead generation and from secondary screens supporting lead optimization. We describe how we use informatics and computational chemistry to focus our efforts on asking relevant questions about the desired attributes of a specific library, and subsequently to guide the generation of more information-rich sets of analogs in support of both processes.
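As a hedged sketch of the general idea (the authors' actual pipeline and descriptors are not described here), the following Python fragment uses RDKit to compute a few simple 1D properties for a hypothetical hit list and rank it with an arbitrary desirability score.

```python
# Illustrative descriptor-driven triage of uHTS actives; the SMILES,
# property set, and scoring weights are assumptions for the example.
from rdkit import Chem
from rdkit.Chem import Descriptors

hits = {"hit-1": "CC(=O)Oc1ccccc1C(=O)O", "hit-2": "CCCCCCCCCCCCCCCC(=O)O"}

def profile(smiles):
    # Compute a small panel of 1D properties for one structure
    mol = Chem.MolFromSmiles(smiles)
    return {
        "mw": Descriptors.MolWt(mol),
        "logp": Descriptors.MolLogP(mol),
        "tpsa": Descriptors.TPSA(mol),
    }

def score(p):
    # Crude desirability: prefer smaller, less lipophilic actives
    # (weights are arbitrary assumptions, not the authors' criteria)
    return -(p["mw"] / 500.0 + p["logp"] / 5.0)

ranked = sorted(hits, key=lambda h: score(profile(hits[h])), reverse=True)
print(ranked)
```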
Improving Synthetic Efficiency Using the Computational Prediction of Biological Activity
Authors: K. C. Brogle, T. Gund and D. J. Kyle
A process has been developed whereby libraries of compounds for lead optimization can be synthesized and screened with greater efficiency using computational tools. In this method, analogues of a lead chemical structure are considered in the form of a virtual library. Less than one third of the library is selected as a training set by clustering the compounds and choosing the centroid of each cluster. This training set is then used to generate a model by PLS regression of the experimental assay values on 1D/2D descriptors. The model is applied to the remaining compounds (the test set), for which assay values are predicted and a rank ordering established. As an example, a set of 169 PDE4 inhibitors was examined. A predictive model was achieved using a training set of 52 compounds. When applied to the remaining 117 compounds, this model allowed a rank ordering of these compounds for synthesis and testing. Selecting the top 33 compounds of the test set gives 78% of the compounds with the desired activity (hits) while synthesizing only 50% of the library, including the training set. Selecting the top 59 of the test set gives 97% of the hits from only 67% of the library. This process succeeds by avoiding two principal weaknesses of 2D descriptors: lack of interpretability and lack of extrapolation. Two principal assumptions of QSAR are shown to be unnecessary: removing descriptor redundancy does not improve fit, and a predictive r² greater than 0.5 is not necessary if rank ordering is the goal.
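A minimal sketch of the described workflow follows, using synthetic data in place of the 169 PDE4 inhibitors and off-the-shelf scikit-learn components; the descriptor matrix, activity values, and model settings are placeholders, not the authors'.

```python
# Cluster the virtual library, take the member nearest each centroid as
# the training set, fit PLS on the descriptors, and rank-order the rest.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(169, 20))                               # stand-in descriptor matrix
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=169)   # stand-in activity

# Training set = the library member closest to each of 52 cluster centroids
km = KMeans(n_clusters=52, n_init=10, random_state=0).fit(X)
train_idx = np.unique(
    [np.argmin(((X - c) ** 2).sum(axis=1)) for c in km.cluster_centers_]
)
test_idx = np.setdiff1d(np.arange(len(X)), train_idx)

pls = PLSRegression(n_components=5).fit(X[train_idx], y[train_idx])
pred = pls.predict(X[test_idx]).ravel()

# Synthesize and test the highest-ranked predictions first
rank_order = test_idx[np.argsort(pred)[::-1]]
print(rank_order[:10])
```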
Comparison of Methods for Sequential Screening of Large Compound Sets
Authors: Paul E. Blower, Kevin P. Cross, Gabriel S. Eichler, Glenn J. Myatt, John N. Weinstein and Chihae Yang
Sequential screening is an iterative procedure that can greatly increase hit rates over random screening or non-iterative procedures. We studied the effects of three factors on enrichment rates: the method used to rank compounds, the molecular descriptor set, and the selection of the initial training set. The primary factor influencing recovery rates was the method of selecting the initial training set: rates of recovering active compounds were substantially lower with diverse training sets than with training sets selected by other methods. Because structure-activity information is incrementally enhanced in intermediate training sets, sequential screening provides a significant improvement in the average rate of recovery of active compounds compared with non-iterative selection procedures.
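The loop below is an illustrative reconstruction of a sequential-screening procedure on synthetic data; the classifier, batch size, and descriptors are stand-in assumptions rather than any of the ranking methods compared in the paper.

```python
# Sequential screening sketch: seed a training set, fit a model, screen
# the top-ranked candidates, fold the new results back in, and repeat.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 30))                     # stand-in descriptor matrix
active = (X[:, 0] + X[:, 1] > 2.0).astype(int)      # hidden "activity" labels

screened = list(rng.choice(len(X), size=200, replace=False))  # initial set
for _ in range(5):                                  # iterative rounds
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[screened], active[screened])
    pool = np.setdiff1d(np.arange(len(X)), screened)
    probs = model.predict_proba(X[pool])[:, 1]      # predicted P(active)
    screened += list(pool[np.argsort(probs)[-200:]])  # screen top 200 next

print("actives recovered:", active[screened].sum(), "of", active.sum())
```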
A Collaborative Hit-to-Lead Investigation Leveraging Medicinal Chemistry Expertise with High Throughput Library Design, Synthesis and Purification Capabilities
Authors: X. Yang, D. Parker, L. Whitehead, N. S. Ryder, B. Weidmann, M. Stabile-Harris, D. Kizer, M. McKinnon, A. Smellie and D. Powers
High throughput screening (HTS) campaigns, in which laboratory automation is used to expose biological targets to large numbers of materials from corporate compound collections, have become commonplace within the lead generation phase of pharmaceutical discovery [1]. Advances in genomics and related fields have afforded a wealth of targets, such that screening facilities at larger organizations routinely execute over 100 hit-finding campaigns per year [2]. Often, 10⁵ or 10⁶ molecules will be tested within a campaign/cycle, locating a large number of actives that require follow-up investigation. Given the resource constraints at every organization, traditional chemistry methods for validating hits and developing structure-activity relationships (SAR) become untenable when challenged with hundreds of hits in multiple chemical families per target. To compound the issue, the comparison and prioritization of hits across multiple screens, or against physicochemical property criteria, is made more complex by the informatics issues associated with handling large data sets. This article describes a collaborative research project designed to simultaneously leverage the medicinal chemistry and drug development expertise of the Novartis Institutes for Biomedical Research Inc. (NIBRI) and ArQule Inc.'s high throughput library design, synthesis, and purification capabilities. The work processes developed by the team to efficiently design, prepare, purify, assess, and prioritize multiple chemical classes identified during high throughput screening, cheminformatics, and molecular modeling activities are detailed.
Interactive Tools for Risk Reduction and Efficiency Improvements in Medicinal Chemistry
Authors: Kevin C. Brogle, Cindy Lin and Paul R. Blake
There are many decisions and risks associated with the design and development of new pharmaceutical agents. To help improve decision-making and reduce the associated risks prior to synthesis, we have developed interactive web-browser tools for: (i) tracking, searching, clustering, and categorizing (by reactive moieties) chemical reactants; (ii) interactively assessing risks, whether synthetic (based on prior experience), related to absorption following oral administration (based on the rule of 5), or related to diversity; and (iii) a complete architecture for enumerating, analyzing, submitting, and plating large combinatorial or small biased libraries. We believe the implementation of this highly interactive system has given our scientists a competitive advantage by maintaining their focus on the lowest risk, highest quality molecules throughout the research process.
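As one concrete, hedged example of the kind of pre-synthesis risk check mentioned in (ii), the following RDKit sketch counts rule-of-five violations for enumerated library members; it is an illustration, not the authors' implementation.

```python
# Minimal Lipinski rule-of-five filter; the library SMILES are invented.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def lipinski_violations(smiles):
    """Count rule-of-five violations for one candidate structure."""
    mol = Chem.MolFromSmiles(smiles)
    return sum([
        Descriptors.MolWt(mol) > 500,
        Crippen.MolLogP(mol) > 5,
        Lipinski.NumHDonors(mol) > 5,
        Lipinski.NumHAcceptors(mol) > 10,
    ])

# Flag enumerated library members before committing synthesis resources
library = ["CC(=O)Oc1ccccc1C(=O)O", "CCCCCCCCCCCCCCCCCCCCCCO"]
for smiles in library:
    print(smiles, "violations:", lipinski_violations(smiles))
```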
Functional Characterisation of Homomeric Ionotropic Glutamate Receptors GluR1-GluR6 in a Fluorescence-Based High Throughput Screening Assay
Authors: Mette Strange, Hans Brauner-Osborne and Anders A. Jensen
We have constructed stable HEK293 cell lines expressing the rat ionotropic glutamate receptor subtypes GluR1i, GluR2Qi, GluR3i, GluR4i, GluR5Q and GluR6Q, and characterised the pharmacological profiles of the six homomeric receptors in a fluorescence-based high throughput screening assay using Fluo-4/AM as a fluorescent Ca²⁺ indicator. In this assay, the pharmacological properties of nine standard GluR ligands correlated well with those previously observed in electrophysiological studies of GluRs expressed in Xenopus oocytes or mammalian cells. The potencies and efficacies displayed by the agonists (S)-glutamate, (S)-quisqualate, kainate, (RS)-AMPA, (RS)-ATPA, (RS)-ACPA and (S)-4-AHCP at the six GluRs were in concordance with electrophysiological studies. Furthermore, the Ki values exhibited by the competitive antagonists NBQX and (RS)-ATPO were also in agreement with the findings of previous studies. Finally, the effects of various concentrations of Ca²⁺ in the assay buffer, and of the allosteric modulators cyclothiazide and concanavalin A, on GluR signalling were examined. This study represents the most elaborate functional characterisation of multiple AMPA and KA receptor subtypes in the same assay reported to date. We propose that high throughput screening of compound libraries against the six GluR-HEK293 cell lines could be helpful in the search for structurally and pharmacologically novel ligands acting at these receptors.
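For illustration, the snippet below fits a four-parameter Hill equation to an invented concentration-response series of the sort such a Fluo-4 assay produces, estimating agonist EC50; the data points and fitting choices are assumptions, not values from the study.

```python
# Hill-equation fit to a hypothetical agonist concentration-response curve.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter Hill equation for a concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

conc = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4])    # agonist concentration, molar
signal = np.array([102, 180, 950, 2900, 3100])      # fluorescence counts (invented)

params, _ = curve_fit(hill, conc, signal,
                      p0=[100, 3000, 1e-6, 1.0], maxfev=10000)
print(f"EC50 ≈ {params[2]:.2e} M, Hill slope ≈ {params[3]:.2f}")
```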