

Human studies

Dr. J. M. Folkman presented an excellent example of how, based on animal observations, one could predict and proceed to similar principles in human cancer. The angiogenesis inhibitors angiostatin and endostatin were discovered in an animal model in which a primary tumor suppresses its metastases. A different strategy was developed to discover endogenous inhibitors of angiogenesis generated by human tumors. An equal number of tumor cells was injected into each flank of a mouse, and if one tumor grew and suppressed the opposite tumor, he searched for an inhibitor of angiogenesis in the circulation and in conditioned medium from cultures of the tumor cells. A cleaved fragment of antithrombin III was generated by a human small cell lung cancer and by a human pancreatic cancer. This fragment is a specific endothelial inhibitor and a potent angiogenesis inhibitor, and does not bind thrombin.
When this finding is taken together with angiostatin, an internal fragment of plasminogen, it suggests a molecular linkage between the hemostatic system and regulators of angiogenesis.
Dr. W. G. Kaelin pointed out that the human von Hippel-Lindau (VHL) tumor suppressor protein may have an effect on the increasingly important proteasome degradation machinery. Inactivation of the VHL tumor suppressor gene in humans gives rise to a hereditary cancer syndrome characterized by central nervous system and retinal hemangioblastomas. The VHL protein (pVHL) plays an important role in the inhibition of hypoxia-inducible genes under well-oxygenated conditions. This activity has been linked to the ability of pVHL, once bound to elongin C and Cul2, to target HIF1α and HIF2α for degradation.
The acquisition of cell immortality is an important step in tumor progression, often conferred by expression of the telomerase enzyme. Dr. R. A. Weinberg showed that inactivation of the enzyme through use of a dominant negative telomerase results in crisis and death of telomerase-positive human tumor cells. Introduction of the telomerase gene together with the SV40 large T antigen and ras oncogenes transforms normal human cells to tumorigenicity, in turn creating human tumor cells of defined genetic constitution.
A glimpse of a final outlook came from Dr. A. J. Levine, who showed that new technologies could broaden insight into the natural history of human cancer without, however, abolishing the need for the experimental model, which will provide conceptual progress for the biology of cancer. He examined human colon cancers and matched normal colon tissue for their transcription profiles using Affymetrix DNA chips. A cluster analysis of the patterns of gene expression was able to separate cancer tissue, normal tissue and colon cancer cell lines. Genes that were coordinately regulated (e.g., genes for ribosomal proteins) clustered together and differed in their expression pattern between normal and cancer tissue.
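For readers who want to see the shape of such an analysis, the sketch below clusters a synthetic expression matrix and separates "tumor" from "normal" samples. It is a minimal illustration of hierarchical clustering on expression data, not the study's actual pipeline; the data, the two-cluster cut and the correlation metric are all assumptions made for the example.

```python
# Minimal sketch: hierarchical clustering of expression profiles
# (synthetic data only -- not the Affymetrix data from the study).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical matrix: 20 samples x 500 genes. The first 10 "tumor"
# samples share a coordinated shift in a block of 50 genes, mimicking
# a coordinately regulated gene cluster (e.g., ribosomal proteins).
normal = rng.normal(0.0, 1.0, size=(10, 500))
tumor = rng.normal(0.0, 1.0, size=(10, 500))
tumor[:, :50] += 3.0  # coordinately up-regulated block
X = np.vstack([tumor, normal])

# Correlation distance with average linkage is a common choice for
# expression data; cutting the tree at two clusters should separate
# the tumor samples from the normal ones.
Z = linkage(X, method="average", metric="correlation")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```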

Does one always have to go, and can one always go, via the animal model to gain insights into the natural history of human cancer?

Dr. E. A. Sausville's introduction not only set the tone for the meeting but raised the key question about the relevance of animal models both for understanding human cancer and for drug development. The empirical model of cancer drug discovery and development is based on the antiproliferative activity of agents in murine xenografts of human tumor cells. Retrospective comparison of activity in Phase II clinical trials with preclinical activity leads to the conclusion that a drug's activity in preclinical models of a given histology correlates poorly with its subsequent clinical performance in the same histology. If anything, the number of different models responding, and the magnitude of the responses, correlate with subsequent demonstration of activity in some human tumor. These results encourage efforts to define drug leads by their performance in rationally and molecularly defined models. Development and optimization then occur in relation to the continued ability of the lead structure to affect the molecular target. In vivo models would serve this goal by allowing a clear-cut read-out of the drug's effect on its molecular target in the milieu of an intact animal host.
Dr. B. A. Chabner pointed out that animal models are often hard to come by when one wants to examine in detail a drug that happened to be effective in humans. He presented the story of the identification, preclinical evaluation and early clinical evaluation of a new cytotoxic molecule (ET-743), a DNA minor groove-binding molecule that has shown activity in human sarcomas, melanoma and mesotheliomas. Cell line screening detected antitumor activity accurately, but no validated mouse models exist for predicting activity in humans. The genetic diversity of human tumors, the complexity of genetic changes in human tumors and the diversity of host handling of drugs all complicate the development of predictive models. The speaker suggested that human leukemias, with specific characteristic translocations, might be easier to model than human epithelial tumors with their highly complex sequential genetic mutations.


Animal models and model studies

Good animal models for prostate cancer were long lacking. Recently, some progress has been made, and three examples of the dynamics in this field were provided. Probably the best such example was documented by Dr. D. Waters with spontaneous tumors of pet dogs, which share striking similarities with their human counterparts with respect to epidemiology, histologic spectrum and metastatic behavior. Canine osteosarcoma also provides a particularly useful model system to test novel therapies directed against minimal residual disease. Studies are being designed to elucidate the association between organismal aging, oxidative stress and cancer development in dogs. Priorities for future research will be to characterize the molecular biology of particular tumors and to determine their suitability for evaluating targeted therapies.
Then, Dr. N. Schreiber-Agus reminded the study group that prostate adenocarcinoma remains one of the more elusive cancer types, in part due to a lack of understanding of the normal and diseased human prostate on the molecular and cellular levels. Several complementary animal models of prostate cancer, including the genetically manipulated mouse, have emerged as a means to confront this elusiveness. Newly characterized reagents and strategies may allow us to overcome the limitations of existing mouse models and to develop models that will illuminate the pathogenetic basis of prostate cancer initiation and progression.
Finally, Dr. W. R. Sellers presented an example of a particular pathway possibly deranged in prostate tumors. The PTEN protein acts to regulate cell-cycle progression and cell survival. These functions require PTEN to act as a lipid phosphatase and, in so doing, antagonize PI3K signaling. Downstream targets of PI3K/PTEN, such as AKT, are activated in PTEN−/− tumors. Mouse models of prostate cancer based on deregulation of this pathway are in progress.
Dr. T. L. Benjamin presented the case of polyoma virus, which perturbs multiple signaling pathways that impinge on cell growth and can induce a broad range of solid tumors in mice. Inactivation of individual pathways in the virus may alter the histological patterns, sizes and frequencies of tumors, depending on the particular pathway and target tissue. The host genetic background can also influence the patterns of, and overall susceptibility to, polyoma-induced tumors, thus providing another example of the relevance of the host background, not only for animal models of human tumors but also for the outcome of carcinogens on animal targets in general.
Dr. A. Berns used conditional tumor suppressor gene knockout mice to produce, in a tissue-specific fashion, specific tumors in mice. This method permitted him to perform a detailed genotype-phenotype analysis, making it possible to correlate distinct genetic lesions with specific tumor characteristics. In an initial series of experiments, the loss of the retinoblastoma gene (Rb) in combination with p107 and p53 was studied. Inactivation was directed to various cell types such as photoreceptor cells, the pineal gland, choroid plexus astrocytes and the intermediate lobe of the pituitary gland. A range of tumors was found, including choroid plexus tumors, pineal gland tumors, pituitary tumors and medulloblastomas. Interestingly, some but not other tumors required loss of p53, as measured by loss of heterozygosity (LOH), indicating a cell type-specific need for specific gene mutations. Such studies illustrate the potential of these mice for dissecting tumorigenic processes. It is expected that they will also be valuable for testing therapeutic intervention protocols.
Dr. N. E. Hynes discussed whether a single genetically modified organ could be implanted ectopically into an animal and thus used to test natural effectors or drugs. The mammary gland is unique in that a major part of its development takes place after birth. This allowed other researchers to show, in 1959, that the entire developmental program of the mammary gland could be recapitulated following transplantation of mammary tissue fragments or cells into the cleared fat pad of syngeneic hosts. Thus, an interesting possibility for developing models of breast cancer at the orthotopic site is to introduce primary mammary cells that have been genetically manipulated to express specific transforming proteins into fat pads cleared of host tissue. Overexpression of the receptor tyrosine kinase c-Met (and/or its ligand HGF) has been found in many breast tumors. Primary mammary epithelial cells ectopically expressing HGF rapidly form mammary tumors after transplantation into cleared fat pads. This model will be used to test the effects of specific inhibitors.
Dr. T. A. Van Dyke brought up the point that there appear to be cell/tissue-specific tumor suppression mechanisms for p53. In normally nondividing brain epithelial cells (choroid plexus), p53 inactivation has no effect on an otherwise wild-type background. Upon inactivation of the Rb family proteins, however, p53-dependent apoptosis suppresses tumor growth and progression. In thymocytes, p53 inactivation predisposes to tumorigenesis, an activity that is not dependent on V(D)J recombination. In contrast, V(D)J recombination is required for thymic lymphoma induced by ATM deficiency. Thus p53 and ATM suppress thymic lymphoma by distinct mechanisms.
Dr. M. E. Ewen addressed the relationship between the retinoblastoma protein (pRb) and Ras. Rb-deficient mice display elevated levels (up to 30-fold) of active, GTP-bound Ras, suggesting that Ras is a downstream target of pRb. The influence of pRb on Ras is linked to the ability of pRb to regulate differentiation. Taken together with previous work, his data suggest that cytoplasmic-nuclear signaling between pRb and Ras is bidirectional.
p300 and CBP constitute a closely related family of nuclear, signal-integrating molecules, both of which are targets of DNA tumor virus oncoproteins. Dr. D. A. Livingston's group produced mice heterozygous for a null allele of p300 and compared them with mice heterozygous for a null allele of CBP. The mice heterozygous for CBP developed several forms of hematological malignancy with LOH at the CBP locus. The mice heterozygous for a null allele of p300 remained healthy for ≥ 20 months. Therefore, in at least one strain of mice, CBP is a tumor suppressor, and p300 is not.
Dr. R. A. DePinho reported on his work with Dr. L. Chin, in which they used regulated expression of a gene to obtain information about its function, a highly successful approach that is attracting more attention, as already shown by Dr. A. Berns. Melanocyte-specific expression of oncogenic RAS coupled with INK4a deficiency generates malignant melanoma in mice, establishing a causal role for RAS in melanomagenesis. To determine whether initiating genetic lesions, i.e., oncogenic RAS, are still required for tumor maintenance, the tet system was exploited to turn off RAS in established tumors. Down-regulation of RAS resulted in complete regression of primary tumors, a process highlighted initially by massive endothelial cell death. This inducible melanoma model establishes that initiating genetic lesions remain relevant to tumor maintenance and indicates roles for tumor-associated changes in directing host support mechanisms.
Dr. S. J. Korsmeyer brought attention to apoptosis, which has been investigated not only at the murine, i.e., mammalian, level but for which seminal discoveries were made at the invertebrate level as well. Many BCL-2 family members exist in active and inactive conformations determined by post-translational modifications in response to proximal death and survival signals. The pro-apoptotic molecule BAD, a “BH3 domain-only” molecule, is inactivated by phosphorylation of serine 112 by mitochondria-tethered PKA or of serine 136 by the PI-3 kinase pathway. BID, an inactive p22 molecule, resides in the cytosol and is activated by caspase-8 cleavage following tumor necrosis factor-α/Fas engagement, resulting in translocation of truncated BID to mitochondria. The three-dimensional structure of BID suggests that BCL-2 family members can be subgrouped into constitutively inactive molecules with hidden BH3 domains and active ones with an exposed hydrophobic face.
Dr. C. M. Croce provided a classical example where new observations made at the human level will require mouse experimentation (e.g., knockouts) to solidify assumptions or to obtain deeper insights into mechanisms. He has identified a gene at 3p, which he named FHIT, that is mutated by deletions in a very high percentage of some of the most common human cancers, including lung cancer. The gene encodes an Ap3A hydrolase that cleaves Ap3A into AMP and ADP. Transfection of the human FHIT gene into human tumor-derived cell lines results in suppression of tumorigenicity. A study of human lung cancers for expression of the FHIT protein indicates that almost 100% of small cell carcinomas and 73% of non-small cell carcinomas (NSCLC) have lost the ability to express the FHIT protein. Interestingly, 85% of bronchial dysplastic lesions are FHIT negative, suggesting that loss of FHIT function is a very early event in lung carcinogenesis. The mouse FHIT gene was knocked out in mouse embryonal stem cells, and FHIT+/− and FHIT−/− mice were obtained. FHIT−/− mice develop stomach cancers at 9 months. FHIT+/− mice can be induced to develop stomach cancers and rhabdomyosarcomas after treatment with low doses of carcinogens.


Technology and market structure

A major focus of this monograph is the relationship between technology and market structure. High-technology industries are subject to the same market forces as every other industry. However, there are some forces that are particularly important in high-tech, and it is these forces that will be our primary concern. These forces are not ``new.'' Indeed, the forces at work in network industries in the 1990s are very similar to those that confronted the telephone and wireless industries in the 1890s.
But forces that were relatively minor in the industrial economy turn out to be critical in the information economy. Second-order effects for industrial goods are often first-order effects for information goods.


Figure 1: Return on the NASDAQ and S&P 500 during the 1990s.
Take, for example, cost structures. Constant fixed costs and zero marginal costs are common assumptions in textbook analysis, but are rarely observed for physical products, since there are capacity constraints in nearly every production process. But for information goods this sort of cost structure is very common; indeed, it is the baseline case. This is true not just for pure information goods, but even for physical goods like chips. A chip fabrication plant can cost several billion dollars to construct and outfit, but producing an incremental chip costs only a few dollars. It is rare to find cost structures this extreme outside of technology and information industries.
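To make this cost structure concrete, here is a back-of-the-envelope calculation; the fixed and marginal cost figures are illustrative assumptions, not actual fab data. With fixed cost $F$ and constant marginal cost $c$, total and average cost are

$$ C(q) = F + cq, \qquad AC(q) = \frac{F}{q} + c. $$

Taking $F = \$3$ billion and $c = \$5$ per chip, average cost is about \$3,005 at one million chips but only \$35 at one hundred million: the fixed cost dominates, and average cost falls with volume over any realistic range of output.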
The effects I will discuss involve pricing, switching costs, scale economies, transactions costs, system coordination, and contracting. Each of these topics has been extensively studied in the economics literature. I do not pretend to offer a complete survey of the relevant literature, but will focus on relatively recent material in order to present a snapshot of the state of the art of research in these areas.
I try to refer to particularly significant contributions and other more comprehensive surveys. The intent is to provide an overview of the issues for an economically literate, but non-specialist, audience.
For a step up in technical complexity, I can recommend the survey of network industries in the Journal of Economic Literature consisting of articles by Katz and Shapiro [1994], Besen and Farrell [1994], and Leibowitz and Margolis [1990], and the books by Shy [2001] and Vulkan [2003]. Farrell and Klemperer [2003] contains a detailed survey of work involving switching costs and network effects with an extensive bibliography.
For a step down in technical complexity, but with much more emphasis on business strategy, I can recommend Shapiro and Varian [1998a], which contains many real-world examples.

  Intellectual property

There is one major omission from this survey, and that is the role of intellectual property.
When speaking of information and the technology used to manipulate it, intellectual property is a critical concern. Copyright law defines the property rights of the product being sold. Patent law defines the conditions that affect the incentives for, and constraints on, innovation in physical devices and, increasingly, in software and business processes.
My excuse for the omission of intellectual property from this survey is that this topic is ably covered by my coauthor, David [2002]. In addition to this piece, I can refer the reader to the surveys by Gallini and Scotchmer [2001], Gallini [2002] and Menell [2000], and the reviews by Shapiro [2000] and Shapiro [2001]. Samuelson and Varian [2002] describe some recent developments in intellectual property policy.

  The Internet boom

First we must confront the question of what happened during the late 1990s. Viewed from 2003, such an exercise is undoubtedly premature, and must be regarded as somewhat speculative. No doubt a clearer view will emerge as we gain more perspective on the period. But at least I will offer one approach to understanding what went on.
I interpret the Internet boom of the late 1990s as an instance of what one might call ``combinatorial innovation.''
Every now and then a technology, or set of technologies, comes along that offers a rich set of components that can be combined and recombined to create new products. The arrival of these components then sets off a technology boom as innovators work through the possibilities.
This is, of course, an old idea in economic history. Schumpeter [1934], p. 66, refers to ``new combinations of productive means.'' More recently, Weitzman [1998] used the term ``recombinant growth.'' Gilfillan [1935], Usher [1954], Kauffman [1995] and many others describe variations on essentially the same idea.
The attempts to develop interchangeable parts during the early nineteenth century are a good example of a technology revolution driven by combinatorial innovation. The standardization of design (at least in principle) of gears, pulleys, chains, cams, and other mechanical devices led to the development of the so-called ``American system of manufacture,'' which started in the weapons manufacturing plants of New England but eventually led to a thriving industry in domestic appliances.
A century later the development of the gasoline engine led to another wave of combinatorial innovation as it was incorporated into a variety of devices from motorcycles to automobiles to airplanes.
As Schumpeter points out in several of his writings (e.g., Schumpeter [2000]), combinatorial innovation is one of the important reasons why inventions appear in waves, or ``clusters,'' as he calls them.

... as soon as the various kinds of social resistance to something that is fundamentally new and untried have been overcome, it is much easier not only to do the same thing again but also to do similar things in different directions, so that a first success will always produce a cluster. (p. 142)
Schumpeter emphasizes a ``demand-side'' explanation of clusters of innovation; one might also consider a complementary ``supply-side'' explanation: since innovators are, in many cases, working with the same components, it is not surprising to see simultaneous innovation, with several innovators coming up with essentially the same invention at almost the same time. There are many well-known examples, including the electric light, the airplane, the automobile, and the telephone.
A third explanation for waves of innovation involves the development of complements. When automobiles were first being sold, where did the paved roads and gasoline come from? The answer: the roads were initially the result of the prior decade's bicycle boom, and gasoline was often available at the general store to fuel stationary engines used on farms. These complementary products (and others, such as pneumatic tires) were enough to get the nascent technology going, and once growth in the automobile industry took off, it stimulated further demand for roads, gasoline, oil, and other complementary products. This is an example of an ``indirect network effect,'' which I will examine further in section 10.
The steam engine and the electric motor also ignited rapid periods of combinatorial innovation. In the middle of the twentieth century, the integrated circuit had a huge impact on the electronics industry. Moore's law has driven the development of ever-more-powerful microelectronic devices, revolutionizing both the communications and the computer industry.
The routers that laid the groundwork for the Internet, the servers that dished up information, and the computers that individuals used to access this information were all enabled by the microprocessor.
But all of these technological revolutions took years, or even decades, to work themselves out. As Hounshell [1984] documents, interchangeable parts took over a century to become truly reliable. Gasoline engines took decades to develop. The microelectronics industry took 30 years to reach its current position.
But the Internet revolution took only a few years. Why was it so rapid compared to the others? One hypothesis is that the Internet revolution was minor compared to the great technological developments of the past (see, for example, Gordon [2000]). This may yet prove to be true; it's hard to tell at this point.
But another explanation is that the component parts of the Internet revolution were quite different from the mechanical or electrical devices that drove previous periods of combinatorial growth.
The components of the Internet revolution were not physical devices at all. Instead they were ``just bits.'' They were ideas, standards, specifications, protocols, programming languages, and software.
For such immaterial components there were no manufacturing delays, no shipping costs, and no inventory problems. Unlike gears and pulleys, you can never run out of HTML! A new piece of software could be sent around the world in seconds, and innovators everywhere could combine and recombine this software with other components to create a host of new applications.
Web pages, chat rooms, clickable images, web mail, MP3 files, online auctions and exchanges ... the list goes on and on. The important point is that all of these applications were developed from a few basic tools and protocols. They are the result of the combinatorial innovation set off by the Internet, just as the sewing machine was a result of the combinatorial innovation set off by the push for interchangeable parts in the late eighteenth century munitions industry.
Given the lack of physical constraints, it is no wonder that the Internet boom proceeded so rapidly. Indeed, it continues today. As better and more powerful tools have been developed, the pace of innovation has even sped up in some areas, since a broader and broader segment of the population has been able to create online applications easily and quickly.
Twenty years ago, the thought that a loosely coupled community of programmers, with no centralized direction or authority, would be able to develop an entire operating system would have been rejected out of hand. The idea would have been just too absurd. But it has happened: GNU/Linux was not only created online, but has even become respectable and now poses a serious threat to very powerful incumbents.
Open source software is like the primordial soup for combinatorial innovation. All the components are floating around in the broth, bumping up against each other and creating new molecular forms, which themselves become components for future development.
Unlike closed-source software, open source allows programmers and ``wannabe programmers'' to look inside the black box to see how the applications are assembled. This is a tremendous spur to education and innovation.
It has always been so. Look at Josephson's [1959] description of the methods of Thomas Edison:

``As he worked constantly over such machines, certain original insights came to him; by dint of many trials, materials long known to others, constructions long accepted were put together in a different way, and there you had an invention.'' (p. 91)
Open source makes the inner workings of software apparent, allowing future Edisons to build on, improve, and use existing programs, combining them to create something that may be quite new.
One force that undoubtedly led to the very rapid dissemination of the web was the fact that HTML was, by construction, open source. All the early web browsers had a button for ``view source,'' which meant that many innovations in design or functionality could immediately be adopted by imitators (and innovators) around the globe.
Perl, Python, Ruby, and other interpreted languages have the same characteristic. There is no ``binary code'' to hide the design of the original author. This allows subsequent users to add on to programs and systems, improving them and making them more powerful.

  Financial speculation

Each of the periods of combinatorial innovation referred to in the previous section was accompanied by financial speculation. New technologies that capture the public imagination inevitably lead to an investment boom: Sewing machines, the telegraph, the railroad, the automobile ... the list could be extended indefinitely.
Perhaps the period that bears the most resemblance to the Internet boom is the so-called ``Euphoria of 1923,'' when it was just becoming apparent that broadcast radio could be the next big thing.
The challenge with broadcast radio, as with the Internet, was how to make money from it. Wireless World, a hobbyist magazine, even sponsored a contest to determine the best business model for radio. The winner was ``a tax on vacuum tubes,'' with radio commercials being one of the more unpopular choices.
Broadcast radio, of course, set off its own stock market bubble. When the public gets excited about a new technology, a lot of ``dumb money'' comes into the stock market. Bubbles are a common outcome. It may be true that it's hard to start a bubble with rational investors, but it's not that hard with real people.
Though billions of dollars were lost during the Internet bubble, a substantial fraction of the investment made during this period still has social value. Much has been made of the miles of ``dark fiber'' laid. But it's just as cheap to lay 128 strands of fiber as a single strand, so the marginal cost of the ``excess'' investment was likely rather low.
The biggest capital investment during the bubble years was probably in human capital. The rush for financial success led to a whole generation of young adults immersing themselves in technology. Just as it was important for teenagers to know about radio during the 1920s and automobiles in the 1950s, it was important to know about computers during the 1990s. ``Being digital'' (whatever that meant) was clearly cool in the 1990s, just as ``being mechanical'' was cool in the 1940s and 1950s.
This knowledge of, and facility with, computers will have large payoffs in the future. It may well be that part of the surge in productivity observed in the late 1990s came from the human capital invested in facility with spreadsheets and web pages, rather than from the physical capital represented by PCs and routers. Since the hardware, the software, and the wetware (the human capital) are inextricably linked, it is almost impossible to subject this hypothesis to an econometric test.

  Where are we now?

As we have seen, the confluence of Moore's Law, the Internet, digital awareness, and the financial markets led to a period of rapid innovation. The result was excess capacity in virtually every dimension: compute cycles, bandwidth, and even HTML programmers. All of these things are still valuable; they're just not the source of profit that investors once thought, or hoped, they would be.
We are now in a period of consolidation. These assets have been, and will continue to be, marked to market, to better reflect their true asset value: their potential for future earnings. This process is painful, to be sure, but not that different in principle from what happened to the automobile market or the radio market in the 1930s. We still drive automobiles and listen to the radio, and it is likely that the web, or its successor, will continue to be used in the decades to come.
The challenge now is to understand how to use the capital investment of the 1990s to improve the way that goods and services are produced. Productivity growth accelerated during the latter part of the 1990s and, uncharacteristically, continued to grow during the subsequent slump. Is this due to the use of information technology? Undoubtedly it played a role, though there will continue to be debates about just how important it has been.
Now we are in the quiet phase of combinatorial innovation: the components have been perfected, the initial inventions have been made, but they have not yet been fully incorporated into organizational work practices.
David [1990] has described how the productivity benefits from the electric motor took decades to reach fruition. The real breakthrough came from miniaturization and the possibility of rearranging the production process. Henry Ford and his entire managerial team were down on the factory floor every day, fine-tuning the flow of parts through the assembly line as they perfected the process of mass production.
The challenge facing us now is to re-engineer the flow of information through the enterprise. And not only within the enterprise: the entire value chain is up for grabs. Michael Dell has shown us how direct, digital communication with the end user can be fed into production planning so as to perfect the process of ``mass customization.''
True, the PC is particularly susceptible to this form of organization, given that it is constructed from a relatively small set of standardized components. But Dell's example has already stimulated innovators in a variety of other industries, and many other examples of innovative production enabled by information technology will arise in the future.

The ``New Economy''

There are those who claim that we need a new economics to understand the new economy of bits. I am skeptical. The old economics, or at least the old principles, works remarkably well. Many of the effects that drive the new information economy were there in the old industrial economy; you just have to know where to look.
Effects that were uncommon in the industrial economy, such as network effects and switching costs, are the norm in the information economy. Recent literature that aims to understand the economics of information technology is firmly grounded in the traditional literature. As with the technology itself, the innovation comes not in the basic building blocks, the components of economic analysis, but rather in the ways in which they are combined.
Let us turn now to this task of describing these ``combinatorial innovations'' in economic thinking.


ACHIEVEMENTS IN VECTOR DESIGN

The successful realization of gene therapy programs in medicine depends heavily on the state of development of vector design. This area of investigation has to deal with a number of pressing and complex issues in order to optimize the performance of gene transfer technology in preclinical studies and clinical research. The aspects that need to be addressed may be summarized as follows:
  • The transduction efficiency of both viral- and nonviral-based vectors must be improved. Also, the production and purification procedures for vectors must be optimized.
  • In the matter of gene delivery safety, the first rule is that vectors must not be pathogenic or toxic to the patients. For this reason, viral vectors have been engineered to be noncompetent for replication, and devoid of viral factors that may pose a hazard in humans. However, a great deal of attention is still drawn to the possibility of replication-competent virus formation in patients. Another concern is the issue of insertional mutagenesis of vectors based on retroviruses or on adeno-associated virus (AAV) type 1 or type 2. A rather new aspect that has been considered is the possible recombination between retroviral-based vectors and human endogenous retroviruses (HERVs). In order to improve the performance of gene transfer technology, viral-based vectors must be modified in order to reduce their toxicity and immunogenicity in patients. A number of significant advances have been accomplished in this respect. One study has also raised some concern about the immunogenicity of selectable markers [54], which normally derive from bacteria. Therefore, the transduction of cells of the hematopoietic lineages may lead to selectable markers entering the antigen-presenting cell pathway. This in turn would render the transduced cells susceptible to cytotoxic T lymphocyte (CTL) immune responses [54]. Indeed, this principle is the very basis of genetic immunization.
  • It is necessary to enhance the targeting and specificity of vectors to avoid unpredictable side effects due to the ectopic expression of the transgene in normal tissues. This requirement is essential to generate gene delivery systems suitable for in vivo administration. Most human gene therapy protocols currently rely on ex vivo gene transfer manipulations, in which certain cells or tissues must be removed from the patient, transduced in vitro, possibly selected for expression of the transgene, and then reinfused into the patient. The entire procedure is costly and distressing for the patient. Health care systems and pharmaceutical companies would greatly benefit from the possibility of applying gene therapy approaches based on in vivo gene delivery, as the therapeutic interventions are minimally invasive and may only require either an injection or the administration of pills [18]. Indeed, the in vivo transduction approach would also allow for a broader application of gene transfer technology in therapy. Certain pathological conditions cannot be dealt with using the ex vivo gene therapy approach, as not all cells or tissues can be surgically removed; neurons and cardiac cells are examples. However, the in vivo gene therapy approach poses many additional safety concerns compared with the ex vivo one. Recent studies have shown there is a possibility that exogenous DNA (transgene and/or viral vector sequences) may eventually be transmitted to the germ line through systemic in vivo administration of viral vectors [55, 56]. Sensitive nested polymerase chain reaction (PCR) techniques have allowed for the detection of low levels of exogenous viral vector DNA in the ovaries and testes of mice that received systemic administration of adenoviral vectors [56]. Ninety-four percent of these animals tested positive for the presence of adenoviral DNA in the gonads. However, after mating the animals there was no evidence of germ line transmission of adenoviral DNA to the offspring [56]. This issue should also be addressed for in vivo retroviral- or AAV-mediated gene transfer. These viral vectors may have higher probabilities of entering the germ line, as they integrate their chimeric viral genome into host chromosomal DNA [5].
  • In many cases, the possibility of regulating transgene expression following cell transduction would be a highly desirable feature. This should allow for the activation of a transgene when it is needed, the maintenance of transgene expression within a therapeutic window, and the possibility of silencing a transgene if necessary. There have been a number of attempts to generate inducible systems. Partial successes have been reported in in vitro systems [5, 57-62] and animal models [63-67]. However, whether transgene regulation can be achieved in patients is still an open question.
  • The possibility of combining gene-based interventions with other therapeutics has to be considered.
A broad arsenal of gene transfer systems is currently available [5] and is still expanding. The characteristics of the main vector systems are described in Table 1. Each gene delivery system has distinct characteristics and preferential applications in therapy [5]. The vectors that have already been applied in clinical trials are based on retroviruses [68-72], adenovirus [73-78], AAV [79-85], vaccinia virus [86, 87], canarypox virus [87], herpes simplex virus (HSV) [88], cationic liposomes [89-92], polylysine-DNA complexes [93, 94], and injection of naked DNA [22, 26, 27, 30]. As anticipated, the pathological conditions with which gene therapy has dealt so far comprise cancer [2], inherited or acquired monogenic disorders [34], AIDS [3], and cardiovascular diseases [19-21]. In addition, vectors based on vaccinia virus, canarypox virus, injection of naked DNA and other nonviral vectors have been used in AIDS vaccination programs in the USA [22-24]. Interestingly, viral-based vectors have also been directly administered to patients in order to transduce, in vivo, cells that are capable of processing the transgene through the antigen-presenting cell pathway. In these cases, the transgene encodes certain HIV-1 components. The intracellular expression of viral antigens within transduced cells facilitates the cells' antigen-presenting mechanism. In this way various viral epitopes are associated with host HLA class I antigens and expressed on the cell membrane to elicit the host's CTL immune responses [5].

VECTOR SYSTEMS BASED ON RETROVIRUSES, LENTIVIRUSES AND FOAMY VIRUSES

Retroviruses have attracted a great deal of interest from the standpoint of gene transfer applications [5]. Such interest is certainly motivated by the biology of the retroviruses, which belong to the family Retroviridae; this family also comprises the lentiviruses and foamy viruses. The retroviridae have a long history of cross-species infections [116, 117]. They have been responsible for many zoonotic events (transmission of infectious agents from animals to humans) [116], which indicates that they may be suitable for DNA delivery into humans. The retroviral genome is relatively simple [118], so it may easily be rearranged to generate recombinant viral vector particles that are noncompetent for replication [5] and can sustain only one round of infection. Retroviral vectors are mainly based on the amphotropic Moloney murine leukemia virus (MLV) [118], and have been used in many gene therapy clinical trials for the treatment of cancer [25], inherited or acquired monogenic disorders [5], and AIDS [119-124]. Lentiviral vectors are based on HIV-1 [98-104] or on FIV [105-107]. Neither lentiviral- nor foamy virus-based vectors have been used in clinical trials yet. However, the HIV-1-based lentiviral vector system is unlikely to be approved for clinical trials for a variety of reasons. First is the issue of serum conversion of the patients to HIV-1. Second, the production and administration of lentiviral vector stocks require category three facilities. Third, the large quantities of lentiviral vector stocks that would have to be produced for clinical trials pose an additional biosafety concern. Fourth, this vector system is already obsolete, owing to the development of the FIV-based lentiviral vector system, which has circumvented all the above-mentioned issues. In fact, FIV has been certified for category two manipulations, and is based on a lentivirus which cannot infect humans; therefore, serum conversion to FIV does not raise any concern. The characteristics of the retroviridae vector systems are summarized in Table 1. All these viral vector systems can be produced at relatively high titers (10⁶-10⁷ cfu/ml) [5]. A property of retroviruses is that they can only infect dividing cells, as they need the breakdown of the nuclear membrane to deliver the preintegration complex into the cell nucleus [125]. Conversely, lentiviruses [98-107] and, to a lesser extent, foamy viruses [95-97, 126] can also infect nondividing cells. The requirement for active cell division can be either an advantage or a drawback for retroviral vectors. The selective transduction of dividing cells makes retroviral vectors suitable for cancer therapy [5]. On the other hand, retroviral vectors cannot be used for a variety of therapeutic applications, such as neurologic diseases and a number of genetic diseases that require the transduction of hepatocytes [127], as neurons and hepatocytes do not divide. In all these respects, FIV-based lentiviral vectors may find useful applications. Indeed, retroviral vectors have been used in a number of preclinical studies for liver-directed gene transfer and in some clinical trials [127]. The procedure used was based on ex vivo or in vivo transduction of hepatocytes, which were induced to proliferate by complex and artificial procedures [127]. Retroviral-mediated ex vivo transduction relies on stimulating cell division by culturing primary hepatocytes in appropriate media [127].
This approach has been employed in preclinical studies of the following genetic diseases: type I tyrosinemia, familial hypercholesterolemia and α1-antitrypsin deficiency [127]. One clinical trial was conducted to treat familial hypercholesterolemia by retroviral-mediated ex vivo gene transfer. The low-density lipoprotein (LDL) receptor gene was introduced into hepatocytes that had been surgically removed from patients, and which were then reinfused into the liver following gene transduction [128, 129]. The procedure was safe, but there was no convincing evidence of therapeutic efficacy [127]. Liver biopsies were taken after treatment, and few cells tested positive for expression of the LDL receptor [127], indicating that the transduction efficiency was not high, or that transduced cells were lost or eliminated after reinfusion into the liver. In vivo retroviral-mediated transduction of hepatocytes is even more complicated, as it requires artificial regeneration of the liver [127]. This may be achieved by a variety of means: partial hepatectomy, chemical injury, administration of growth-stimulating drugs or vascular occlusion [127]. Experiments in animal models have shown efficient retroviral-mediated gene transfer into the liver of rodents [127], but poor efficacy of the intervention in larger animals such as dogs [127]. This is probably due to the different kinetics of liver regeneration in large mammals and rodents. In conclusion, in vivo administration of retroviral vectors into the liver does not seem applicable to humans. The development of a retroviral vector system based on the hepatitis B virus may facilitate liver-directed gene delivery; such a vector is under development [130, 131]. Interestingly, one study has shown successful liver-directed, hepatitis B viral-mediated gene transfer of green fluorescent protein. In addition, delivery of type I interferon by a hepatitis B-based retroviral vector suppressed endogenous wild-type virus replication in the duck model of hepatitis B virus infection [131]. However, this viral vector system needs further characterization, and should also be adapted to the rodent animal model before its application in clinical trials can be considered.
All the viral vectors based on retroviridae can be used to transduce a wide range of cell types. This is due to the fact that HIV-1, FIV and foamy virus cores can be pseudotyped with the MLV amphotropic envelope or the vesicular stomatitis virus G (VSV G) glycoprotein (Table 1) [132, 133]. Pseudotyping with the VSV G glycoprotein also allows for easy purification of the various viral vector particles: they become more stable and resistant, so they can be isolated from cell culture supernatants by simple ultracentrifugation [134]. Foamy viral vectors have a broad cell tropism even without being pseudotyped with MLV amphotropic envelopes or with the VSV G glycoprotein [95-97, 126]. Interestingly, wild-type foamy viruses are resistant to complement-mediated lysis [95] and have a total insert capacity in the virion of approximately 14 kb [95]. Conversely, MLV-based retroviral, lentiviral and foamy viral vectors pseudotyped either with amphotropic retroviral envelopes or with the VSV G glycoprotein are susceptible to complement-mediated lysis [135-138], and their total insert capacity in the virion is in the range of 10 kb [5]. It has been demonstrated that packaging cell lines expressing galactosyl(α1-3)galactosyl (αGal) sugars generate enveloped viruses that are more susceptible to complement attachment [136]. The viral systems analyzed in this study were based on VSV, HIV-2 and human foamy virus [136]. It has been argued that the humoral immune response to αGal may be a mechanism of defense against the transmission of viral agents from animals to humans [136], and that viral vectors for human gene therapy should therefore be produced from αGal-negative cells [136]. Another study has reported the production of MLV-based amphotropic retroviral vectors resistant to human complement [139]. This was achieved by expressing hybrid amphotropic envelopes on the viral membrane, generated by fusing in frame the catalytic domain of the human complement regulatory protein decay-accelerating factor with a portion of the envelope [139].
The possibility of concentrating retroviral, lentiviral, and foamy viral vector particles may improve the transduction efficiency for both ex vivo and in vivo applications. Protection from complement-mediated lysis is particularly required for the optimization of in vivo gene transfer models. A number of other studies have been conducted to further improve the performance of retroviral vectors in preclinical studies and clinical trials. A simple approach consists of using enhanced green fluorescent protein as the reporter gene [140-143]. This allows for the rapid detection and isolation of the fraction of cells that have been transduced ex vivo. In addition, the green fluorescent protein can be readily detected in tissues following infusion of transduced cells into animals [140, 141]. Other strategies to improve the retroviral transduction efficiency are based on the artificial induction of cell division. This can be achieved in many ways: preincubation of primary cultures of hematopoietic stem cells with various interleukins (IL-2, IL-3, IL-6) and/or other growth factors or colony-stimulating factors [144-146]; combination of retroviral- and LipofectAMINE-mediated gene transfer into stem cells prestimulated with IL-2 (in this study, LipofectAMINE was used to facilitate the delivery of retroviral vectors into the target cells) [147]; colocalization of retroviral particles and hematopoietic stem cells on specific fibronectin fragments (Retronectin) [148]; and combination of the Retronectin system with prestimulation of hematopoietic stem cells with interleukins or other growth factors [149, 150]. Ex vivo retroviral transduction of human hematopoietic stem cells also has several disadvantages. Besides being costly and time-consuming, this approach may introduce artifacts into the hematopoietic stem cells. For instance, the in vitro culture conditions may impair the ability of transduced hematopoietic stem cells to engraft once they are reinfused into the subject. This situation has already been mentioned for the gene-based clinical trial for the treatment of familial hypercholesterolemia, in which the target cells were hepatocytes [127]. The tissue culture conditions for the ex vivo propagation and transduction of human hematopoietic stem cells involve nonphysiologic cell concentrations and require combinations of growth factors that may induce cell differentiation and, therefore, limit the long-term engraftment of the transduced cells. It has been observed that HIV-1- and FIV-based lentiviral vectors may be more suitable for the transduction of hematopoietic cells than amphotropic retroviral vectors [98, 100, 104, 107]. The ability of lentiviruses to infect nondividing cells may circumvent the need to prestimulate hematopoietic stem cells [151]. Moreover, lentiviruses usually yield higher transduction efficiencies in primary stem cell cultures than retroviral vectors [152, 153]. However, an important aspect that must be addressed in the matter of lentiviral-mediated gene transfer is to establish whether the transfer vector remains episomal in the nucleus of transduced cells that are in G0 phase. Transgene expression detected following lentiviral transduction of quiescent cells may indeed derive from extrachromosomal double-stranded DNA transfer vector. If this is the case, lentiviral transduction of quiescent cells may only allow for transient expression of the transgene.
An important safety issue in the matter of viral-mediated gene transfer is the formation of replication-competent viruses in patients, which may occur through homologous recombination events within the packaging cell lines. Retroviral vector stocks are routinely monitored in clinical trials for the absence of replication-competent retroviruses (RCR) [154]. The techniques are essentially based on sensitive PCR and serological enzyme-linked immunosorbent assays [154]. In addition, retroviral stocks must be tested for the absence of endotoxins and various contaminating agents, such as bacteria and fungi, which may be acquired during the propagation of packaging cell lines or target cells [119, 155]. The purity of the various genetic materials used in the trial must also be tested [119, 155]. RCR formation is a rather unlikely event owing to the design of retroviral vectors. The current trend is to produce high titer retroviral vector stocks transiently [5] in order to further minimize the possibility of recombination events among the various retroviral components in the packaging cell line. These transient systems are based on cotransfection of three plasmids into the highly transfectable 293T cell line [156]. As reviewed elsewhere [5], the proviral genome has been broken down into three parts, and overlapping sequences have been mostly removed. RCR formation is unlikely because it would require simultaneous rearrangement among three different plasmids in a specific configuration within a very limited period of time; the transfection procedure usually takes 48 to 72 h to produce the retroviral vector stocks [5]. So far, the retroviral vectors used in clinical trials derive from conventional packaging cell lines, which were previously approved for clinical applications by the U.S. Food and Drug Administration [3, 157]. Studies are currently addressing the issue of generating clinical grade retroviral vector stocks by transient transfection systems [158].
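A toy probability model illustrates why splitting the proviral genome across three plasmids is so effective; the per-event probability below is an assumption chosen for illustration, not a measured value. If reconstituting a replication-competent genome requires $k$ independent recombination events, each occurring with probability $p$ in a given cell during the transfection window, then

$$ P(\mathrm{RCR}) \approx p^{k}. $$

With, say, $p = 10^{-6}$ and $k = 3$, the joint probability is on the order of $10^{-18}$ per cell, whereas a design requiring a single recombination would sit at $10^{-6}$; removing overlapping sequences pushes $p$ itself down still further.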
Another safety concern is the possible recombination between retroviral vectors and HERVs in patients (Table 1). The human genome contains thousands of HERV sequences [159-161], most of which are defective genes. These HERV sequences derive from ancient retroviral infections [160] in which transmission occurred either in germ line cells or in cells of the early embryo [160, 161]. About 1% of the human genome is composed of HERV-related sequences [161], and probably more than 10% of the human genome may have evolved through reverse transcription mechanisms [161]. So far, only one HERV has been found that encodes a complete viral particle; it was named HERV-K [162]. However, HERV-K is not competent for replication [162]. The biological relevance of HERVs deserves further investigation. HERVs may have advantageous effects in fundamental biological processes such as development and/or differentiation, protection from superinfection by exogenous retroviruses, protection of the embryo from retroviral infection (germ line vaccination), cell fusion, tissue-specific gene expression, alternative splicing, and polyadenylation [161]. The potential pathogenicity of HERVs cannot be predicted; they may be involved in the development of malignancies and autoimmune diseases [161]. The envelope of an HERV may either protect the host from exogenous retroviral infection in a receptor interference fashion [163] or dysregulate local cellular immunity through a superantigen-encoding region, as proposed for type I diabetes [164]. One study has observed that the multiple sclerosis-associated retrovirus detected in the plasma of patients with multiple sclerosis [165, 166] has high homology to an HERV [167], which was named HERV-W. Xenotransplantation techniques and gene therapy approaches based on retroviridae vectors may eventually tamper with the biology of HERVs [161]. Retroviral vectors may recombine with HERVs in patients and generate a variety of possible adverse effects. At this point in time, we cannot predict the possible adverse effects of such recombination, owing to the lack of sufficient information about HERVs. What one can expect is the formation of RCR in patients, or the expression of HERV genes that were silent prior to the gene therapy or xenotransplantation intervention. If such events should occur, the subject might develop cancer or become susceptible to immune system dysregulation.
The integration of the retroviral genome into chromosomes allows for stable transgene expression. This stability is also due to the low immunogenicity of retroviral particles. This is in contrast with what has been observed for adenoviral-mediated gene transfer, where transgene expression is only transient. There are two reasons for the transient nature of transgene expression in adenoviral-mediated gene transfer. First, the adenoviral genome does not integrate into the host chromosomal DNA [5]. Second, the adenoviral particles are immunogenic [5] and express leaky adenoviral genes that render the transduced cells susceptible to CTL immune responses [168-171]. Stable retroviral-mediated transgene expression is desirable for the treatment of diseases that require long-term expression of the transgene, such as genetic disorders and neurologic illnesses [5]. However, the duration of transgene expression is still not optimal, because the retroviral long terminal repeats (LTRs) are susceptible to methylation in CpG-rich islands, which may silence gene transcription [172, 173]. The incidence of this phenomenon depends on the type of transduced cells and the site of retroviral genome integration [174]. It has been shown that Sp1 binding sites may, to some extent, prevent methylation of the promoter [175]. Retroviral vectors based on murine embryonic stem cell virus (MESV) [176, 177] and on murine stem cell virus (MSCV) [178] have been engineered to optimize the duration of transgene expression in undifferentiated murine embryonic and hematopoietic cells [176-178]. To this end, the LTRs of the MESV- and MSCV-based vectors have been modified. In the MESV vectors, the 5′-LTR contains an extra Sp1 binding site, introduced by a point mutation [176, 177]. This has optimized, to some extent, the duration of transgene expression in embryonic and hematopoietic cells; however, silencing of transcription has been observed following differentiation of embryonic stem cells [179]. The MSCV-based vectors, in addition to the point mutation that creates an Sp1 binding site, contain another point mutation that destroys the binding site of the embryonal LTR-binding protein (ELP) [178]. ELP is a transcriptional suppressor of the activity of the MLV 5′-LTR in undifferentiated murine embryonal carcinoma cells [180]. These modifications have further improved the performance of retroviral vectors in terms of duration of transgene expression, although the exact extent of this improvement in in vivo systems still needs to be evaluated.
The random insertion of the retroviral transfer vector has several drawbacks: it may damage the cell genome, cause the inactivation of tumor suppressor genes, or induce the expression of cellular oncogenes. This alone is probably not sufficient to generate a neoplasia, as cancer is a multistep process requiring a combination of genetic alterations and the expression of cellular and/or exogenous oncogenic factors [181]. However, if transduced cells are genetically impaired by random insertion of the viral vector's genome, this would at least predispose the cells to undergo neoplastic transformation. To date, human gene therapy protocols have been applied only to a limited number of patients, most of whom did not have a long life expectancy. An important question is what happens if retroviral-mediated gene transfer is applied in larger scale clinical trials to subjects who have a life expectancy of several decades. The current state of predictive cancer prognosis cannot answer this question, so it is not possible to properly assess the ratio of benefit to risk for all patients. A recent study has addressed the issue of cell transformation induced by retroviral-mediated gene transfer in an in vitro system [182]. Mouse BALB/c-3T3 fibroblasts were transduced with a retroviral vector, and the transformation frequency was compared with that of untransduced cells [182]. The parental cell line undergoes spontaneous transformation at a rate in the range of 1.1 × 10⁻⁵ [183]. In this study, the transformation rate of retrovirally transduced BALB/c-3T3 cells was in the same range [182]. The number of integrated proviral copies per cell genome varied from one to six, depending on transduction efficiency [182]. Thus, improved transduction efficiency correlates with better transgene expression, which in turn reflects the higher number of integrated transfer-vector copies per cell genome; but it is also proportional to the risk of mutagenic events. Previous studies on retroviral-induced mutagenesis in mammalian cells have found a ratio of “mutations versus insertional events” ranging from 10⁻⁹ to 10⁻³ [184-189]. Such variability indicates that the ratio of mutations per insertional event depends on the cell type and assay system. This ratio should be established for human primary lymphocytes, which normally are not retrovirally transduced as efficiently as mouse fibroblasts [182, 190, 191] or other cultured cell lines [191]. However, a lower transduction efficiency, per se, does not guarantee a lower ratio of “mutations versus insertional events” in human primary lymphocytes. All these findings suggest that, in the design of clinical protocols using retroviral-mediated gene transfer, the average number of integrated viral genomes should be carefully evaluated. Such a procedure is feasible for ex vivo retroviral-mediated gene transfer, but not for the in vivo administration system.
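To see why the number of integrated copies matters, a back-of-the-envelope calculation using the figures above helps; it assumes that mutagenic events are independent across insertions, which the cited studies do not establish. With $n$ integrated copies per genome and a per-insertion mutation probability $r$, the expected number of mutagenic events per transduced cell is roughly

$$ E \approx n \cdot r. $$

At the top of the reported range ($n = 6$, $r = 10^{-3}$), $E \approx 6 \times 10^{-3}$, i.e., about 6,000 affected cells per $10^6$ transduced cells; at the bottom ($n = 1$, $r = 10^{-9}$), only about 0.001 per $10^6$. For comparison, the spontaneous transformation rate of $1.1 \times 10^{-5}$ corresponds to roughly 11 transformed cells per $10^6$.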
Overall, the in vivo administration of retroviral vectors poses a number of additional safety concerns and technical limitations compared with the ex vivo gene transfer model. To pursue the goal of safe and efficient in vivo retroviral transduction, it is necessary to generate tissue- or cell-specific retroviral vectors that integrate their genome into safe chromosomal sites. The latter issue has never been tackled, whereas the engineering of ecotropic-based retroviral vectors with altered cell tropism has attracted much attention [5], although all such attempts have had little success. The chimeric retroviral particles that have been produced have a low transduction capacity [5], or even fail in the gene transfer process altogether [192]. To date, the ex vivo retroviral-mediated gene transfer model is more realistic than the in vivo one, although it is not optimal for gene therapy applications. Also, from the standpoint of safety, the ex vivo procedure can be more easily monitored.