MEDICINE AND SURGERY "F"
Course of LABORATORY MEDICINE
Statistical considerations

STATISTICAL BASIS OF DIAGNOSIS


      Diagnosis implies assigning a patient to a group, and requires a nosography, i.e. a coherent and comprehensive classification of diseases (e.g. the ICD). It cannot be overemphasized that diagnosis is a complex process and that the recognition of the disease that affects the patient under study, and its correct classification, is only the first step. After this has been accomplished, the physician will ask why that specific patient succumbed to that disease, and which form that specific disease has taken in that patient: i.e. which specific and unique conditions are verified in each single case of disease. We may refer to these steps of the diagnostic procedure as the first (or general) and second (or individual) steps of diagnosis. From a historical point of view, the general perspective of diagnosis was initially suggested by René-Théophile Laennec, strongly advocated by Robert Koch and finally formalized in modern terms by William Osler; the individual one by Archibald Garrod.

      Because of interindividual variability (genetic, clinical, epidemiological), both steps of diagnosis are based on statistics: we infer something about a single patient because we can assign his or her case to a group, and we know something about the group because of our previous experience. Thus not only the general diagnosis but also the individual one is based on statistical reasoning. In what follows we shall go through some basic concepts of statistics that have largely been dealt with in other courses.


      HEALTH AND DISEASE
      Diagnosing a disease in a patient implies that the physician has at least an intuitive concept of health and disease; unfortunately a precise definition of these terms is surprisingly difficult. I shall give here only some schematic considerations; the student may want to consult specialized treatises (e.g. E.A. Murphy, The Logic of Medicine or G. Canguilhem, Le Normal et le Pathologique).

      Health has been defined as "the silence of the organism" or "the desirable condition of the organism". These definitions may be made more explicit by defining health as the condition of absence of suffering (absence of symptoms), long life expectancy (good prognosis), and ability to pursue one's interests and duties (adequate functioning).
      Disease is a state characterized by at least one of the following three conditions: suffering (presence of symptoms), and/or short life expectancy (poor prognosis), and/or inability to cope with one's necessities and pleasures (poor functioning). Moreover, a disease is progressive, i.e. it is not a stable state but evolves spontaneously towards healing or death. A state in which any or all of the three above conditions is/are present, but which does not evolve and remains stable, is usually referred to as the result of a previous disease which healed incompletely. Some examples are as follows: (i) the common cold is a disease that causes symptoms and reduces functioning but has a good prognosis; (ii) preclinical gastric cancer is a disease that has a very poor prognosis, but causes no symptoms and is compatible with full functioning (until it becomes clinically evident); (iii) paralysis is the result of a brain injury, e.g. an infarction. In some cases the state of the patient is both a result and a disease: for example, type I diabetes mellitus is the result of the autoimmune destruction of the beta cells in the islets of Langerhans of the pancreas (a result) but is progressive because it causes organ damage (a disease).
      Some diseases are acute, and the patient experiences a sudden decrease of his or her well-being; others are long-standing or congenital (present at birth). In the case of acute diseases, the patient is aware of the existence of a state of health and wants it to be restored; in the other cases there is often no previous healthy state to be restored, but some improvement may be obtained.

      Since the possible reasons for departure from the healthy state are numerous and different from each other, several conditions fulfill the above definition of disease. A very relevant dichotomy is the following: there are diseases that are sharply separated from health, however blurred and uncertain their diagnosis may be, and diseases that are more or less continuous with health. Examples of diseases that are sharply separated from health are those due to genetic or infectious causes: there may be no doubt that a patient suffering from Down syndrome or hemophilia or tuberculosis has a disease and belongs to a group different from that of healthy people. We may have diagnostic doubts, e.g. a culture of the sputum may turn out negative for mycobacteria; but we have no doubt that a group of patients suffering from M. tuberculosis infection is "different", in the sense defined above, from a matched group of healthy individuals.
      Examples of diseases that are not sharply separated from health include arterial hypertension, atherosclerosis, many cases of hypercholesterolemia, etc. A patient suffering from one of these conditions is not, or may not be, a member of a population different from that of healthy people: e.g. everybody has minor atheromatous plaques, and disease is a matter of the relative severity of a widespread condition. It is reasonable to call the former type of disease a qualitative deviation from health, and the latter a quantitative one.
      The above reasoning has relevant consequences on diagnosis. If we suspect that our patient has a sharply defined disease, diagnosis is an act of categorization: we must decide whether he or she is a member of the healthy or of the sick population. On the contrary, if we suspect that our patient has a disease that is continuous with the healthy condition, diagnosis is a matter of assessing the gravity of his or her condition.

      Health is often confused with normality. The word normality indicates an event that obeys a rule (norm); in medicine the rule is statistical, and normal is used as a synonym of "frequent" or "common". In quantitative terms, when applicable, normal means "within two S.D. of the mean value of the parameter under consideration" and includes 95% of the population (if the parameter is distributed as a Gaussian). Normality in medicine is a very crude concept, useful for the physician who wants to know whether his or her patient requires further investigation, but nothing more. The reason why normal can be confused with healthy is simply that, because of our evolutionary history, the desirable physical condition of the organism (i.e. health) is also fit, favoured by natural selection, and hence common. One should be cautious with this equation, however, for a very basic reason: natural selection favors individuals who produce healthy, fertile and numerous children, and does not care about what happens past the fertile age of life; conditions that cause suffering and death in advanced age are not selected against. Thus atherosclerosis is extremely frequent in humans above forty (somewhat later in women than in men), as are atherosclerosis-related diseases (arterial hypertension, ischemic parenchymal damage and so on): this is an example of a condition that is statistically "normal" and yet pathological.


      DISTRIBUTIONS: experimental error, homogeneous populations, heterogeneous populations
      The experimental error is defined as the difference between the result of a measurement and the actual value of the parameter (which we must presume to be known by other means). There are two types of experimental error: random and systematic. The random error is the difference one observes when the measurement of the same, unvarying object is repeated several times: e.g. if we measure the weight or height of a patient, we obtain a series of values very close, yet not exactly equal, to each other. The systematic error is usually due to incorrect calibration or regulation of the instrument and causes the measured values to be systematically different by some (small) amount from the actual value: e.g. if we measure the weight of a patient using a balance that has not been zeroed correctly, we obtain a series of systematically deviated measurements. A measurement which has small random errors has precision; one which has small systematic errors has accuracy.

Fig.1: Errors of measure. The "true" value of the parameter (which we suppose known) is indicated by the red line. The green curve shows the distribution of the measurements obtained by an accurate instrument with low precision (the mean coincides with the true value but the random error is large). The blue curve shows the distribution of the measurements obtained by a precise but inaccurate instrument (the mean does not coincide with the true value but the random error is small).


      Random errors are easier to detect than systematic ones: they usually have a Gaussian distribution, in which values closer to the mean are more frequent than values far from it. By contrast, they are more difficult to explain and to prevent than systematic ones.
      Why do random errors exist at all? They have multiple subtle causes, e.g. an instrument that is operated by electrical power may give slightly different measurements of the same sample due to slight voltage fluctuation in the power line. There is no way and usually no need to eliminate random errors (provided that they are small): the only sensible thing to do is to repeat the measurement several times and assume the average of the measurements as the best estimate of the true value of the parameter.
      It is important to recall that the Gaussian (or bell-shaped) curve is described by two parameters: the mean, which defines the position of its maximum, and the variance (s²), which defines its width. The variance is defined as:
s² = Σ (xᵢ - mean)² / (n - 1)
where xᵢ is measurement i and n is the total number of measurements. The mean is, obviously, Σ xᵢ / n.
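As a minimal illustration, the following Python sketch (with hypothetical repeated weighings) computes the mean and the sample variance defined above:

    # Best estimate of a measured value from repeated measurements.
    # Hypothetical repeated weighings of the same patient (kg).
    readings = [70.2, 70.4, 69.9, 70.1, 70.3]

    n = len(readings)
    mean = sum(readings) / n                                      # Σ xi / n
    variance = sum((x - mean) ** 2 for x in readings) / (n - 1)   # Σ (xi - mean)² / (n - 1)

    print(f"mean = {mean:.2f} kg, s² = {variance:.4f}, s.d. = {variance ** 0.5:.3f} kg")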

      Systematic errors are difficult to detect (how can we know the "actual value" of the measured parameter if not through another measurement?), but easier than random errors to explain and to correct, since they usually result from an incorrect instrumental setup. The only sensible way to detect a systematic error is to compare the readings of two (or more) different instruments, or of two (or more) different methods, measuring the same parameter. E.g. we may measure the weight of a sample using two different balances; the measurements must be repeated several times on each instrument and averaged to take care of random errors; if the average of the measurements obtained from the first instrument differs significantly from the average of those obtained from the second, then either (or both) of them has a systematic error. It is important to eliminate or minimize systematic errors: this can be obtained by proper calibration of the instruments using standard samples.
      Estimating the systematic error of a clinical test is not an easy task. In some cases it can be done by preparing an artificial sample, using the most accurate and precise instruments in our laboratory, and submitting it to the standard analysis. E.g. if we measure blood electrolytes using potentiometric methods, we can prepare a solution of the desired ion or salt at a concentration known by weight (the balance is the most precise and reliable instrument in the lab) and submit it to the same potentiometric measurement as our blood samples. We can also add the desired ion to the blood sample (by weighing appropriate amounts of a suitable salt) and measure its concentration (so-called internal standard). Given the importance of this matter, all clinical instruments are frequently tested against known standards in order to check for random and systematic errors. An important point is the following: the relevance of systematic errors is minimized if the laboratory provides its own experimentally determined estimates of the "normal" range of the clinical parameters it measures (very few clinical laboratories do so). The reason is that the significance of clinical parameters is judged by comparison with their "normal" range, and if both the parameter and its range are shifted in the same direction by a common systematic error, the significance of the measurement is not influenced by the error.
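A sketch of the two-instrument comparison, with hypothetical readings of the same standard sample from two balances; if the difference between the averaged readings greatly exceeds the random scatter of either series, at least one balance carries a systematic error:

    # Detecting a systematic error by comparing two instruments.
    # Hypothetical repeated weighings (g) of the same standard sample.
    balance_a = [10.02, 10.01, 10.03, 10.02, 10.01]
    balance_b = [10.12, 10.10, 10.13, 10.11, 10.12]   # reads ~0.1 g too high

    mean_a = sum(balance_a) / len(balance_a)   # averaging takes care of random errors
    mean_b = sum(balance_b) / len(balance_b)

    print(f"balance A: {mean_a:.3f} g, balance B: {mean_b:.3f} g, "
          f"difference: {mean_b - mean_a:+.3f} g")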

      If we measure a clinical parameter in a homogeneous human population we usually find that its values are distributed, so that it has an average value (the mean) and values close to the mean are more frequent than values far from it. A plot of the frequency versus the parameter value (grouped in discrete classes of equal amplitude) yields a bell-shaped (Gaussian) curve. The width of the Gaussian curve is determined by the variance of the parameter (or by its square root, the standard deviation). A good example is the Intelligence Quotient, IQ, which in the healthy population has mean=100 and standard deviation=15 (see the blue curve in the figure below):

Fig.2: Gaussian distributions

      It is important to state that "healthy" in this context means that the members of the sample have no diagnosis of a disease affecting the measured parameter. Since the parameter is distributed, and does not have the same value for all members of the population (or of the group studied), only approximately 95% of its members fall in the interval (mean - 2 S.D.) to (mean + 2 S.D.).
      Given that both the random error of the measurement and the distribution of the parameter in the population are Gaussian-shaped, and are simultaneously present in our sample of measurements, how can we distinguish between the two? We can estimate the variance of the measurement (i.e. the amplitude of its random error) by repeating the measurement several times on the same individual(s), and compare it with the variance of the population. As a general rule, the variance of the measurement in the population is much greater than that of the measurement on the same individual, and when dealing with the variance of the population we can often neglect the random error of the measurement. The opposite case, i.e. a population variance as small as the variance of the measurement (it can never be smaller), is very uncommon and indicates either that the population is made up of identical members or that our instrument is too crude to detect the differences that are present.
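A sketch of this comparison, with hypothetical glycemia values (mg/dL); as is typical, the population variance greatly exceeds the variance of the repeated measurement:

    import statistics

    # Variance of the assay vs. variance of the population (hypothetical data).
    same_patient = [81, 79, 80, 82, 80]            # one patient, measured five times
    population   = [72, 95, 88, 80, 101, 76, 90]   # seven patients, measured once each

    var_assay      = statistics.variance(same_patient)   # random error of the measurement
    var_population = statistics.variance(population)     # inter-individual variability

    print(f"assay variance = {var_assay:.1f}, population variance = {var_population:.1f}")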
      Why are clinical parameters distributed, rather than identical? Several factors co-operate to this phenomenon: genetic heterogeneity of the population; environmental causes; clinical history and the effect of previous diseases of each individual; etc. It is an interesting question whether randomness may also result from purely probabilistic (stochastic) effects: this has been proven in some cases (e.g. situs viscerum inversus).

      The case of heterogeneous populations is somewhat more complex. Suppose that we test a random sample of people from the healthy population and a random sample with the same number of people from a different population, having a specific diagnosis. We end up with two Gaussian distributions with different mean and S.D. (blue and red curves in the figure above). If the two samples are mixed, the end result will be a bimodal distribution, made up of the sum of two independent Gaussian curves (green curve, shifted vertically by 5 points to avoid superposition with the other two). E.g. in the figure above the blue curve may refer to the distribution of the IQ in a sample of 10,000 healthy people and the red curve to the distribution of the IQ in a sample of 10,000 people with Down syndrome. Notice that in this example the diagnosis from the karyotype is easy and very accurate: thus we have no doubts in the assignment of each individual to his or her group.
      If we test a random sample from the human population, we shall include healthy and ill people according to the prevalence of each disease present in the population; and since ill people are usually much rarer than healthy, the distribution of the measured parameter will be again bimodal or multimodal, but the Gaussian curve corresponding to the healthy population will dominate the picture: e.g. the prevalence of all genetic defects leading to a mean IQ value of 50 or less is approximately 1% of the total population (see the red and blue curves in the picture below).

Fig.3: Gaussian distributions for populations of unequal amplitude

      It will be noticed that in the above discussion of homogeneous and heterogeneous populations the problem of the experimental error in the measurements has been neglected: i.e. we have assumed that the random error is much smaller than the variance of the population and have taken the parameter values obtained from the clinical laboratory as precisely corresponding to the actual values in the patient. This assumption is usually safe: the experimental errors in the measurements are small with respect to the variability of the population. E.g. most measurements of the concentration of blood solutes are accurate to within 2-3% of the actual value, and when we say that the glycemia of a patient is 80 mg/dL our confidence interval is in the order of 77-83 mg/dL. However, not all clinical parameters have the same accuracy and we must be aware of possible errors in their measurement. Two conditions are special and require specific mention: (i) measurements that affect the value of the parameter of interest, and (ii) the presence of confounding or interfering variables.
      An example of a measurement that affects the parameter being measured is the recording of the blood pressure. In some individuals the act of measuring the blood pressure causes psychological stress, and this in turn causes an autonomic response that increases the blood pressure (so-called white coat hypertension). In this case we can resolve neither the random nor the systematic errors of the measurement, nor can we obtain a reliable estimate of the "normal" blood pressure of the patient. The best way to operate is to find a procedure that minimizes this effect (e.g. we can instruct the patient to measure his or her blood pressure by himself or herself, using an automated recorder).
      Confounding variables produce a signal indistinguishable from that of the variable of interest. E.g. the gravimetric measurement of oxalic acid in the urine is easily achieved by adding calcium chloride, allowing calcium oxalate to precipitate, and weighing the desiccated precipitate on a balance. This method is precise, but if the urine contains any other anion whose calcium salt also precipitates (e.g. phosphate), this will be confused with oxalate and will cause the amount of the latter to be overestimated. The presence of confounding variables should be carefully searched for, since they cause a systematic error that is difficult to detect; fortunately, each test has a known and finite number of potential confounding variables (in some cases zero).

      Some of the clinical parameters of a disease that is sharply distinguishable from the health condition will present a bimodal distribution in the general population: i.e. they will be distributed as a (larger) Gaussian for the healthy group and a different (smaller) Gaussian for the sick group, even though some conditions (e.g. the relative numerosities of the two groups) may mask the bimodality (Fig.3, above). In the case of diseases which are not sharply distinguished from the health condition, the distribution of the clinical parameters in the population is Gaussian and unimodal, and the disease group is identified as a "tail" of the distribution. An interesting example is that of arterial hypertension, which may be essential (i.e. idiopathic, its cause being unknown) or may be due to some identified cause (e.g. pheochromocytoma). The corresponding distributions are shown in Fig.4:

Fig.4: Distributions of diastolic pressure in the cases of essential hypertension and pheochromocytoma

      How strong is the distinction between qualitative (i.e. sharply distinguished) and quantitative (i.e. continuous) deviations from the condition of health? Not very strong. Indeed, given that a quantitative deviation from health like essential hypertension has genetic and environmental factors, one may imagine that in the future we shall be able to identify precise causes and transform it into one or more sharply demarcated diseases.


      CERTAIN AND PROBABLE DIAGNOSES
      Up to now we have considered the possible distributions of a measurable physiological parameter in the healthy, ill and mixed population, under the assumption that diagnosis could be made with certainty independently of our measurement. This is rarely the case: most often the parameter is a clue to the diagnosis, and we have to evaluate its clinical significance.
      Before going further, let us distinguish diagnoses that are (almost) certain from diagnoses that are only probable. Some diseases are defined precisely enough to allow the physician to establish a diagnosis that is absolutely unequivocal. This is the case of most genetic diseases, e.g. the Down syndrome of the above example; of most infectious diseases, in which the presence of the causative agent can be demonstrated with certainty; of cancers, which can be ascertained by biopsy; etc. In these cases the physiological parameters we measure give an indication for a definitive test, whose result confirms the diagnosis.
      There are diseases in which an absolute diagnosis is impossible, or at least not always possible. In general these diseases have no unequivocal histological or genetic marker (this is the case for most psychiatric diseases); moreover, their cause is often unknown and several factors may cooperate; finally, their gravity and prognosis may be highly variable (e.g. arterial hypertension). Even diseases which admit a certain and unequivocal diagnosis may in many instances be subject to diagnostic uncertainty: e.g. a cancer marker may be present in the blood of our patient, but the tumour may be too small to be found and subjected to biopsy. In all these cases we formulate a probability diagnosis, i.e. we try to assess how likely it is that the patient is affected by a specific disease. Probability in this case estimates how confident we are in our diagnosis: the uncertainty lies in the physician, not in the body of the patient; and we may increase our confidence by carrying out further tests.


      MULTIPLE DIAGNOSES
      In some cases the patient suffers from more than a single disease and we must establish multiple diagnoses. Since the incidence of acute diseases is usually quite low and their duration is short, the coexistence in the same patient of two acute diseases independent of each other is an uncommon occurrence. By contrast, the coexistence of two unrelated diseases, one chronic and the other acute, or both chronic, is not infrequent, especially in the elderly.
      If we think that the patient suffers from two diseases at the same time, it is also important to establish whether or not they are related to each other: e.g. an acute episode of measles may cause the relapse of a previously silent tuberculosis. This is due to the temporary immunodeficiency caused by measles, which reduces the defences against the colonies of Mycobacterium tuberculosis already present in the lung or elsewhere in the body.


      DISTRIBUTIONS IN RELATION TO DISEASES
      As a general rule, when a disease admits an unequivocal diagnosis, its characteristic parameters exhibit a bimodal or multimodal distribution in the population or, to be more rigorous, the human population is made up of a healthy and an ill subpopulation, each with its own distribution of physiological parameters. The philosopher of science Georges Canguilhem summarized this condition with the following definition: "there is a norm for health and one for (each) disease; and the two norms differ from each other". In these cases the physician uses the clinical parameters to assign his patient to the proper subpopulation. In the classical view of medicine, as championed by the great physician William Osler, this assignment is the diagnosis. If and when a certainty diagnosis can be made, the two Gaussians that represent the relevant diagnostic parameter in the healthy and ill groups are well separated from each other, with minimal or absent superposition: e.g. all patients suffering from Down syndrome have a trisomy of chromosome 21, at least a partial one, whereas healthy people have none.
      Diseases that only admit a probability diagnosis may take two very different forms:
(i) the relevant clinical parameter(s) have a bimodal distribution in the population, but the separation between the ill and healthy groups is incomplete (see the figures above); or
(ii) the relevant clinical parameter(s) have a unimodal, Gaussian distribution and the disease affects those individuals who present extreme values. An example of this condition is hypercholesterolemia.
      The former condition is more frequent.


      BAYESIAN STATISTICS
      The most common condition faced by the clinician is the following: the patient presents some clinical parameters which are far from the mean value of the population, yet compatible with both illness and health (absence of a specific diagnosis). Is a diagnosis justified in his or her case? The answer to this question is a matter of probability and relies on the theory developed by the British mathematician Thomas Bayes (1701-1761).
      The textbook example of Bayes' formula is the following: we have two boxes, each containing 100 balls. Box 1 contains 90 white and 10 red balls; box 2 contains 10 white and 90 red balls. One ball is picked up; how likely is it to come from box 1? This is the a priori probability and in the present case it equals 50%, given that the two boxes contain the same number of balls and each has the same probability of being picked. If we are told that the ball is red, can we refine our estimate? The answer is yes: since the ball is red, we ignore all the white ones in our calculation, and there is only a 10% probability that the ball comes from box 1: this is because the system contains only 100 red balls, 10 in box 1 and 90 in box 2. Our new estimate is the ex post or post-test probability. Often, we can add more tests and refine our estimate further.
      How does this example compare with medicine? Imagine being the only physician on an island inhabited by 200 people, half of whom suffer from malaria. 90% of the people suffering from malaria have recurrent fever; 10% do not (these are the atypical cases of malaria: malignant, blackwater fever, cerebral). Among the people who do not have malaria, only 10% have recurrent fever (e.g. because of infection with Borrelia recurrentis). A patient comes to your ward: how likely is he to have malaria? The answer is the a priori probability: 50%. He refers recurrent fever: how does your estimate change? The answer is the post-test probability: 90%.
      A comparison between the two examples is as follows:
Box 1: contains 100 balls           Group of malaria-free: 100 people
  90 white                            90 report no recurrent fever (true neg.)
  10 red                              10 report recurrent fever (false pos.)
Box 2: contains 100 balls           Group of malaria-sick: 100 people
  10 white                            10 report no recurrent fever (false neg.)
  90 red                              90 report recurrent fever (true pos.)
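The same counting can be written out in a few lines of Python (numbers taken from the island example):

    # Post-test probability of malaria by direct counting.
    sick, healthy = 100, 100           # a priori: half of the 200 islanders have malaria
    fever_if_sick = 0.90               # P(recurrent fever | malaria)
    fever_if_healthy = 0.10            # P(recurrent fever | no malaria)

    true_pos  = sick * fever_if_sick          # 90 sick people with recurrent fever
    false_pos = healthy * fever_if_healthy    # 10 malaria-free people with recurrent fever

    print(true_pos / (true_pos + false_pos))  # 0.9: the a priori 50% rises to 90%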


      Let's now consider a statistically more plausible, but still intuitive, example: suppose that a patient has an IQ of 55 and that the distribution of the IQ in the population is described by the green curve in Fig.3: this patient might be an uncommon healthy individual or may suffer from some specific disease. How can we decide? We have a population of 10,100 individuals belonging to two groups: one hosts 10,000 healthy individuals, the other is composed of 100 people suffering from some specific disease affecting the IQ. 9,950 people from the healthy group have IQ>55, and only 50 people of this group have IQ<55. In the disease group 20 individuals have IQ>55 and 80 have IQ<55. We may consider three questions:
(i) Prior to any analysis, how likely is a member of the population to belong to the disease group? The answer to this question is Pdisease, a priori = numerosity of the group / numerosity of the population = 100/10,100 = 0.0099 or 0.99%.
(ii) Prior to any analysis, how likely is a member of the population to have an IQ<55? Since the total number of individuals with IQ<55 in the population is 50+80=130, the answer to this question is PIQ<55, a priori = 130/10,100 = 0.013 or 1.3%.
(iii) The patient scored IQ<55. How likely is the patient to belong to the disease group? Since of the 130 members of the population with IQ<55 only 80 belong to the disease group, we have Pdisease, post test = 80/130 = 0.61 or 61%. In this calculation we ignore all members of the population with IQ>55, given that our patient does not fall among them.

      The third question is our initial one: the likelihood of a condition requiring diagnosis in the present case is 61%. The first and second questions have been considered to demonstrate the strength of our test, and because their answers will turn out to be useful in the following discussion.

      The general formula for a posteriori (or post test) probability is given by Bayes' rule:
P(H|E) = P(H) x P(E|H) / P(E)
Where: P(H|E): probability of the hypothesis H (in our example of disease) in the presence of condition E (in our example IQ<55);
P(H): probability of the hypothesis prior to any test (in our example the prevalence of disease in the entire population, 0.0099 or 0.99%);
P(E|H): probability of condition E if hypothesis H is true (in our example frequency of IQ<55 in the disease group, 0.8 or 80%);
P(E): probability of condition E in the population (in our example frequency of IQ<55 in the entire population, 0.013 or 1.3%).

If we apply Bayes' formula to the data of our example we obtain:
P(H|E) = P(H) x P(E|H) / P(E) = 0.8 x 0.0099 / 0.013 = 0.61
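The same result can be checked in code (values from the example above):

    # Bayes' rule applied to the IQ example.
    p_h         = 100 / 10_100   # P(H): a priori probability of disease, ~0.0099
    p_e_given_h = 80 / 100       # P(E|H): frequency of IQ<55 in the disease group
    p_e         = 130 / 10_100   # P(E): frequency of IQ<55 in the whole population

    p_h_given_e = p_h * p_e_given_h / p_e
    print(p_h_given_e)   # ~0.615, i.e. the 61% obtained by direct counting (80/130)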


      We may want to consider a graphical representation of our example: this is reported in Fig.5 (an enlarged and modified portion of Fig.3):

Fig.5: Position of the patient's score at test


      Clearly, our example leaves something to be desired: we arbitrarily divided our population and its groups according to the rule of thumb IQ<55, whereas we could gain in precision by setting a narrower condition, e.g. 50<IQ<60. However, Fig.3 makes it clear that at least some values of the measured parameter are compatible with both health and disease, even though with different probability.

      Practical considerations on Bayes' formula
      Application of Bayes' formula to clinical reasoning has some peculiarities. The first and most important problem is that we usually know the required parameters with scarce precision, and often with reference to a population that may not be the same as the patient's: e.g. P(H) is the prevalence of the disease in the whole population to which the patient belongs at the time when we see the patient. This parameter is known with reasonable precision for common chronic diseases (e.g. the various forms of diabetes), much less so for other types of diseases. The parameter P(E), the frequency of the condition (symptom, laboratory finding) in the general population, is usually known with great uncertainty, as is P(E|H) (the frequency of the condition in the subpopulation of people affected by the disease). The ratio P(E|H) / P(E) is usually greater than unity (the symptom is more common in the sick than in the healthy subpopulation; there may be exceptions to this rule), thus P(H|E) > P(H).
      The second problem for diagnosis is that healthy (i.e. negative) people are usually much more frequent than sick (positive) people. This gives negative cases a tremendous advantage in Bayes' formula because, no matter how strong the association between H (disease) and E (symptom), the absolute probability of H, P(H), is low. E.g. if the symptom is present in 99% of the cases of disease and in only 0.01% of the healthy population, but the frequency of the disease is 1 case in 10,000 population, Bayes' formula tells us that the probability of disease in a person presenting the symptom is P(H|E) ≈ 0.5. This follows from the fact that in a population of one million individuals there are 199 persons presenting the symptom, 99 affected by the disease and 100 healthy.
      How can we increase the likelihood of our diagnostic hypothesis? The answer is that we need to increase the ratio P(E|H)/P(E). To obtain this result we use more criteria (symptoms). If we can associate two symptoms with the same disease, e.g. fever and cough, we can replace P(E) and P(E|H) with the product of two probabilities (assuming the symptoms to be independent), i.e. P(E1,2)=P(E1)xP(E2) and P(E1,2|H)=P(E1|H)xP(E2|H). Given that all Ps are lower than unity, the products are lower than either of their factors, but the decrease in P(E1,2) will be much stronger than the decrease in P(E1,2|H), i.e. P(E1,2|H)/P(E1,2) >> P(E1|H)/P(E1).
      Thus we formulate the take-home message: the physician should never rely on a single test or symptom to formulate a diagnosis, but should consider groups of symptoms.
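A minimal sketch of this reasoning, with hypothetical probabilities and under the stated assumption that the two findings are independent:

    # Combining two findings sharpens the diagnosis (hypothetical numbers).
    p_h = 0.001                            # P(H): prevalence of the disease
    e1_in_sick, e1_overall = 0.90, 0.10    # finding 1: P(E1|H) and P(E1)
    e2_in_sick, e2_overall = 0.80, 0.20    # finding 2: P(E2|H) and P(E2)

    one_finding  = p_h * e1_in_sick / e1_overall
    two_findings = p_h * (e1_in_sick * e2_in_sick) / (e1_overall * e2_overall)

    print(one_finding)    # 0.009: a single symptom barely moves the probability
    print(two_findings)   # 0.036: two symptoms together are four times stronger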

      In many cases we cannot apply Bayes' formula quantitatively because we lack precise estimates of the parameters, especially of P(E). For example, we may have estimates of P(E) for a population different from the one to which our patient belongs. However, we can always adopt a semiquantitative reasoning based on Bayes' theorem: given that P(H) is low due to the sheer number of healthy people, even a gross estimate of P(E), provided that it is non-zero, tells us that more than a single diagnostic criterion is necessary, and that three or four are usually enough to give us some confidence in our diagnosis. Clinical reasoning must often rely on incomplete information, and an increase in the number of diagnostic criteria may partially compensate.

      Differential diagnosis
      In many cases, semiquantitative application of Bayes' formula based on two or more positive criteria will tell us that the patient does not belong to the healthy population. However, more than a single disease may explain the presence of the criteria, i.e. more than a single diagnosis is possible. E.g. a person presenting fever, cough, and blood in the sputum is highly unlikely to belong to the healthy population, even though we may not have precise estimates of the frequency of these events in the general population (sick + healthy people). However, several diagnostic hypotheses may explain this clinical picture, e.g. pneumonia, tuberculosis, and lung cancer. We can separately apply Bayes' formula to each of our diagnostic hypotheses and estimate the relative probabilities: P(pneumonia|fever, cough, and blood in the sputum), P(tuberculosis|fever, cough, and blood in the sputum) and P(lung cancer|fever, cough, and blood in the sputum).
      In some cases one hypothesis will be overwhelmingly more likely than the others; in other cases further investigation will be required. Bayes' formula can be applied also to differential diagnosis, i.e. to the discrimination among equally plausible diagnostic hypotheses. To carry out a differential diagnosis we add further criteria, e.g. a chest X-ray examination or a culture of the sputum, and again apply Bayes' formula, except that P(E) in this case is not the probability of a positive criterion in the general population, but the probability of a positive criterion in the sum of the sick populations we have identified.
      We again have a take-home message: the larger the group of diseases we have selected in the first step of the diagnostic procedure, the higher the likelihood that it includes the correct diagnosis to be discovered by the differential diagnosis.


      THRESHOLDS
      Since a complete statistical analysis of overlapping Gaussians in a multimodal distribution is complex, and requires more information than is usually available, physicians usually define threshold values for clinical parameters. Values beyond the threshold require attention and may imply the necessity of a diagnosis. It is important to remark that the threshold is an arbitrary value between the mean value of a parameter in the healthy group and its mean in the disease group. Depending on the parameter chosen, the mean of the healthy group may be higher or lower than that of the disease group. Thus in some cases (e.g. the IQ) the presence of illness is more likely if the parameter value is below the threshold; in other cases (e.g. the bilirubin concentration in the blood) it is more likely if the parameter value is above the threshold.
      As is evident from Fig.3, any threshold value will include members of the healthy group and/or exclude members of the disease group. E.g. suppose that we take IQ=55 as a sensible threshold, implying that any individual with IQ<55 requires further study: this threshold will exclude 20% of the members of the disease group (those having IQ>55), who will not be further studied and thus will not be diagnosed, and include 0.5% of the members of the healthy group (50 of 10,000), for whom a diagnosis will be uselessly searched for.
      In clinical jargon we call positive (i.e. potentially ill) all values falling on the "unexpected" side of the chosen threshold (e.g. IQ<55); positive values may or may not be due to illness, and negative values may or may not be associated with health. We call true positives those values on the unexpected side of the threshold that, when further investigated, lead to a diagnosis, and false positives those which do not lead to a diagnosis. In the same way we call negative all values on the "expected" side of the threshold: true negatives if no illness is present, false negatives in the opposite case. In summary:
True positive: Sick people correctly diagnosed as sick
False positive: Healthy people incorrectly identified as sick
True negative: Healthy people correctly identified as healthy
False negative: Sick people incorrectly identified as healthy

                     Test result: NEGATIVE         Test result: POSITIVE
                     (value on the expected        (value on the unexpected
                     side of the threshold)        side of the threshold)
DISEASE ABSENT       TRUE NEGATIVE                 FALSE POSITIVE
DISEASE PRESENT      FALSE NEGATIVE                TRUE POSITIVE


Fig.6: Positive and negative test results

      The existence of false positives and false negatives is obviously unpleasant: medicine would be simpler if we could eliminate them and unequivocally associate positive results with illness and negative results with health. This occurs in the cases, described above, of certainty diagnoses, and depends on a negligible or absent overlap of the Gaussian distributions of the clinical parameter values in the healthy and disease groups. In all other conditions, however, false positives and false negatives occur. By judiciously choosing the threshold value we can reduce and even abolish either type of false result, but at the expense of an increase in the frequency of the other. E.g. in the case of the IQ we can minimize the frequency of false negatives by increasing the threshold to IQ=80, but such a high threshold will cause a high frequency of false positives (refer to Figs. 3 and 5).
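A sketch of this trade-off, assuming hypothetical Gaussian distributions for the two groups (healthy IQ as N(100, 15), disease group as N(50, 6)):

    from statistics import NormalDist

    # Effect of the threshold on false results (hypothetical distributions).
    healthy = NormalDist(100, 15)
    disease = NormalDist(50, 6)

    for threshold in (55, 80):
        false_neg = 1 - disease.cdf(threshold)   # sick people above threshold: missed
        false_pos = healthy.cdf(threshold)       # healthy people below threshold: flagged
        print(f"threshold {threshold}: false negatives {false_neg:.1%} of the sick, "
              f"false positives {false_pos:.2%} of the healthy")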


      SENSITIVITY AND SPECIFICITY OF TESTS
      Each clinical test should be evaluated for its diagnostic significance, taking into account its ability to discriminate between health and disease. Unfortunately, even if we knew exactly how reliable our tests are, a correct evaluation of their results also requires information about the incidence and prevalence of the disease we are looking for in the population. The test characteristics we consider are:
Accuracy
  Definition: (true positives + true negatives) / total number of measurements
  Practical significance: high accuracy means that the test result, whether positive or negative, has a high correlation with the correct diagnosis.
  Example: 90% accuracy means that the test gives the "correct" result 90 times out of 100.

Predictive Value (precision)
  Definition: true positives / all positives = true positives / (true positives + false positives)
  Practical significance: a high predictive value means that a positive result is probably true: if the test is positive, the subject is probably sick. However, a negative result may possibly be false.
  Example: 90% predictive value means that of 100 positive responses, 90 correspond to sick individuals and 10 to healthy subjects (or subjects suffering from a different disease).

Negative Predictive Value
  Definition: true negatives / all negatives = true negatives / (true negatives + false negatives)
  Practical significance: a high negative predictive value means that a negative result is probably true: if the test is negative, the subject is probably healthy. However, a positive result may possibly be false.
  Example: 90% negative predictive value means that of 100 negative responses, 90 correspond to healthy individuals and 10 to sick individuals wrongly indicated as healthy.

Sensitivity
  Definition: true positives / sick individuals tested = true positives / (true positives + false negatives)
  Practical significance: high sensitivity means that a high percentage of sick people test positive, but the test may have false positives: if the test is negative the subject is probably healthy; if it is positive, investigate further.
  Example: 100% sensitivity means that there are no false negatives: all negatives are true, and a person who tests negative is healthy. However, there may be false positives: a subject who tests positive should be investigated further.

Specificity
  Definition: true negatives / healthy individuals tested = true negatives / (true negatives + false positives)
  Practical significance: high specificity means that a high percentage of healthy people test negative, but false negatives are possible: if the test result is positive, the subject is probably sick; if negative, investigate further.
  Example: 100% specificity means that all healthy subjects test negative and there are no false positives, so a positive result indicates disease. Sick individuals may nevertheless test negative (false negatives), so a negative result might require further investigation.
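These definitions translate directly into code; the confusion-matrix counts below are hypothetical:

    # The five test characteristics computed from hypothetical counts.
    tp, fp, tn, fn = 90, 40, 860, 10

    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)    # predictive value
    npv         = tn / (tn + fn)    # negative predictive value
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    print(accuracy, precision, npv, sensitivity, specificity)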


      These characteristics are not independent of each other: e.g. sensitivity and specificity depend on the same threshold, thus one cannot increase one without decreasing the other. More refined correlations may be written down if one knows the prevalence of the disease in the population (in this context we define prevalence = number of sick individuals / total population, instead of its more rigorous definition as the number of old and new cases per year per million population). As a general rule sensitivity and specificity can be considered intrinsic characteristics of the test, because it is relatively easy to test a number of sick and healthy people and establish these parameters; by contrast, the predictive value and the negative predictive value depend on the prevalence of the disease, because they mix healthy and sick people (see the demonstration below).
E.g. accuracy estimates how often the test yields a true result, be it positive or negative, and, if the entire population (or a large random sample) has been tested, bears the following relation to specificity and sensitivity:
Accuracy = sensitivity x prevalence + specificity x (1-prevalence)
The above formula demonstrates that, when the entire population is tested, prevalence has a large effect on accuracy. This depends on the obvious fact that the groups of ill and healthy people usually differ greatly in numerosity (see above). To compensate for this effect, we define the balanced accuracy, i.e. the accuracy the test would have if the prevalence of the disease were 0.5:
Balanced Accuracy = (sensitivity + specificity) / 2

Prevalence is necessary for the interconversion of the test parameters; e.g. we may write:
sens. = true positives / (N x prevalence)     (where N = number of people tested)
1 - spec. = false positives / (N x (1 - prevalence))
These transformations allow us to write:
precision = sens. x prev. / [sens. x prev. + (1-spec.) x (1-prev.)]
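In code, the last formula becomes a one-line function (the two prevalences anticipate the worked example of the next section):

    # Predictive value (precision) from sensitivity, specificity and prevalence.
    def precision(sens, spec, prev):
        return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

    print(precision(0.98, 0.98, 0.01))   # ~0.33: same test, low prevalence
    print(precision(0.98, 0.98, 0.10))   # ~0.84: same test, high prevalence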
      The choice of which parameters are dependent on disease prevalence (sensitivity and specificity, or precision and negative predictive value) is arbitrary: either pair can be measured independently of prevalence, but the other pair will then be dependent on it. To state this concept more precisely: we can define precision as a function of prevalence, specificity and sensitivity or, equally well, sensitivity as a function of prevalence, precision and negative predictive value.

      Laboratory tests can be used for different applications, ranging from diagnosis to population screenings and statistics. Depending on the intended application, some test characteristics may be more or less desirable, as summarized in the table below:
Characteristics of test               Suggested use              Test positive           Test negative
high accuracy                         all                        probably sick           probably healthy
  (few false + and false -)
high sensitivity, low specificity     secondary prevention       possibly sick           probably healthy
  (few false -)                       (large-scale screening)    (investigate further)
high specificity, low sensitivity     diagnostic tool            probably sick           possibly healthy
  (few false +)                                                                          (investigate further if necessary)

      Effect of prevalence
      Even a very effective test may provide unreliable results if the prevalence of the disease is low (as it usually is). Let's imagine a test (e.g. a serological test) that has 98% specificity and 98% sensitivity, used to test a population of 100,000 subjects for the epidemiological screening of a disease whose expected prevalence is 1% (a similar case occurred in 2020 during the screening for sub-clinical infection with Covid-19 in Santa Clara County, CA, USA). Our population is expected to include 99,000 healthy and 1,000 sick subjects. From the definition of sensitivity we derive:
true positives = 0.98 x 1,000 = 980; false negatives = 1,000 - 980 = 20
From the definition of specificity we derive:
true negatives = 0.98 x 99,000 = 97,020; false positives = 99,000 - 97,020 = 1,980
Thus our test, in spite of a high accuracy ((97,020 + 980) / 100,000 = 98%), has yielded 1,980 false positives against 980 true positives, i.e. a low precision of 980 / (1,980 + 980) = 0.33 = 33%!
We may be partially reassured by realizing that the negative predictive value of our test under these conditions is excellent: 97,020 / (97,020 + 20) = 0.9998 = 99.98%! (Indeed, the predictive value and the negative predictive value are inversely correlated.)
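The worked example can be reproduced in a few lines:

    # Reproducing the screening example: N = 100,000, prevalence 1%.
    n, prev, sens, spec = 100_000, 0.01, 0.98, 0.98

    sick, healthy = n * prev, n * (1 - prev)
    tp, fn = sens * sick, (1 - sens) * sick         # 980 and 20
    tn, fp = spec * healthy, (1 - spec) * healthy   # 97,020 and 1,980

    print("accuracy        ", (tp + tn) / n)    # 0.98
    print("precision       ", tp / (tp + fp))   # ~0.33
    print("neg. pred. value", tn / (tn + fn))   # ~0.9998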
      It is important to stress that these effects depend entirely on the prevalence of the disease in the population, not on the technical characteristics of the test. During an epidemic, or for pathological conditions that may attain a high prevalence in some populations (e.g. arthrosis in the elderly), the same test may provide much more satisfactory results: the table below compares the same test for two different values of prevalence (population size = 100,000):

Results of a test with 98% sensitivity and 98% specificity in two populations of 100,000 individuals each, having different prevalence of the same disease:

                                     prevalence=1%     prevalence=10%
sick people                          1,000             10,000
true positives                       980               9,800
false negatives                      20                200
healthy people                       99,000            90,000
true negatives                       97,020            88,200
false positives                      1,980             1,800
predictive value (precision:
  true positives / all positives)    0.33              0.84
negative predictive value
  (true negatives / all negatives)   0.9998            0.998

      Measurements of prevalence
      This is an important application of testing, e.g. to follow the course of an epidemic. Let's imagine that we have a test whose specificity and sensitivity are both 98% (measured in surveys of cases with certain diagnosis), and that we test random samples of three different populations (e.g. from three different countries), obtaining 5%, 10%, and 15% positives respectively. We know that the lower the number of positives, the lower the prevalence, but also the greater the relative weight of the false positives. The formula to calculate prevalence is as follows:
Prevalence = [fraction positives - (1-spec.)] / (sens. + spec. - 1)
whose application to the above example yields:
fraction positives     prevalence          precision
0.05                   0.0313 (3.13%)      0.61
0.10                   0.0833 (8.33%)      0.82
0.15                   0.1354 (13.5%)      0.88
and we observe that the precision (true positives / total positives) of our estimate increases as the prevalence increases, as expected; thus the lower the fraction of positives, the greater the importance of the appropriate correction.
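A sketch implementing this correction (the formula above is sometimes called the Rogan-Gladen estimator), together with the precision of each estimate:

    # Prevalence corrected for test error, and precision of the estimate.
    def prevalence(frac_pos, sens, spec):
        return (frac_pos - (1 - spec)) / (sens + spec - 1)

    for f in (0.05, 0.10, 0.15):
        p = prevalence(f, 0.98, 0.98)
        precision = 0.98 * p / f          # true positives / all positives
        print(f"positives {f:.0%}: prevalence {p:.4f}, precision {precision:.2f}")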

      UNDESIRABLE NORMAL CONDITIONS; MORAL AND ECONOMIC CONSIDERATIONS
      All the above considerations deal with the hypothesis that sick and healthy people belong to two different groups of the population, and that each patient can be assigned to his or her proper group by means of the opportune clinical tests. There may be, however, conditions that are undesirable and entail an unfavorable prognosis even though they fully belong to the "normal" range of parameters. Since these conditions can often be treated, it is important to recognize them. An example is hypercholesterolemia. There are several diagnosable genetic diseases that cause this condition. However, even in the absence of any of these, there are individuals whose blood cholesterol concentration is high. These individuals represent the tail of a Gaussian distribution and, properly speaking, do not require a diagnosis: they are healthy individuals whose blood cholesterol concentration is 2 or 3 S.D. above the mean value. In spite of belonging to the normal population, these individuals risk all the unwanted consequences of hypercholesterolemia, as ill people do: i.e. the complications of hypercholesterolemia (e.g. atherosclerosis) do not depend on the genetic disease that may or may not be present, but on the actual concentration of cholesterol in the blood. Thus people whose blood cholesterol exceeds some consensus value (200 to 230 mg/dL) should be treated even in the absence of a genetic diagnosis of hereditary hypercholesterolemia. It is important that the physician be aware of the conceptual difference between diagnosing and treating the members of the disease group and treating the tails of the healthy group's Gaussian.

      The use made above of the concepts of health and disease implies some ethical considerations: indeed, if an individual belongs to the healthy group and yet his or her clinical parameters present severe deviations from the average values, one may consider whether he or she needs therapy nevertheless. There is no general rule on this point, but the widespread consensus holds that if an effective symptomatic therapy exists it should be extended to all individuals who may benefit from it, whereas causal therapies should be prescribed, and will only work, according to the diagnosis. E.g. all people with an IQ<50 (or 70) may greatly benefit from receiving specific attention and care by specifically trained staff, at school and elsewhere, irrespective of the diagnosis; on the other hand, a specific therapy will benefit only people suffering from a given disease (e.g. a diet low in phenylalanine and tyrosine is a specific cure for phenylketonuria and the low IQ due to this disease, but will not cure other clinical conditions).

      Ethics must also be considered in the case of thresholds: if we are dealing with a lethal but curable disease (e.g. appendicitis, typhoid, most early cases of cancer, etc.) it is sensible to minimize false negatives by setting the threshold closer to the mean of the true negatives. This is because the consequences of missing the diagnosis may be fatal, whereas the increase in false positives will only cause these patients the inconvenience of further analyses, which will confirm that the disease is not present. However, the physician must take great care to avoid unnecessary interventions: e.g. operating on a false positive for appendicitis.


Questions and exercises:
1) Precision is the attribute of a test that:
has small systematic errors
has small random errors
has accuracy

2) Bayes' formula states that post-test probability is:
directly proportional to the pre-test probability of the hypothesis and to the probability of the condition if hypothesis is true, and inversely proportional to the probability of the condition in the population.
directly proportional to the pre-test probability of the hypothesis and inversely proportional to the probability of the condition if hypothesis is true, and to the probability of the condition in the population.
directly proportional to the pre-test probability of the hypothesis, to the probability of the condition if hypothesis is true, and to the probability of the condition in the population.

3) The sensitivity of a test is defined as:
true negatives / healthy individuals tested
true negatives / sick individuals tested
true positives / sick individuals tested

4) For a chronic disease:
Prevalence is higher than incidence
Incidence is higher than prevalence
Incidence and prevalence are identical




All comments posted on the different subjects have been edited and moved to this web page.

Thank you Professor (lecture on bilirubin and jaundice).

The fourth recorded part, the one on hyper- and hypoglycemias, is not working.
Bellelli: I checked and on my computer it seems to work. Can you better specify
the problem you observe?

This presentation (electrolytes and blood pH) feels longer than previous lectures.
Bellelli: it is indeed. Some subjects require more information than others. I was
thinking of splitting it in two next year.

Bellelli in response to a question raised by email: when we compare the blood pH
with the standard pH we do not mean to compare the "normal" blood pH (7.4)
with the standard pH. Rather we compare the actual blood pH of the patient, with
the pH of the same blood sample equilibrated under standard conditions.
Thus, if we say that the standard pH is lower than the pH, we mean that equilibration with
40 mmHg CO2 has caused absorption of CO2 and has lowered the pH with respect
to its value before equilibration.

(Lipoproteins) Is the production of leptin an indirect cause of type 2 diabetes since
it works as a stimulus to have more adipose tissue that produces hormones?
Bellelli: in a sense yes, sustained increase of leptin causes the hypothalamus to adapt
and to stop responding. Obesity ensues and this in turn may cause an increase in the
production of resistin and other insulin-suppressing protein hormones produced by the
adipose tissue. However, this is quite an indirect link, and most probably other factors
contribute as well.

(Urea cycle) What is the meaning of "dissimilatory pathway"?
Bellelli: a dissimilatory pathway is a catabolic pathway whose function is not to produce
energy, but to produce some terminal metabolite that must be excreted. Dissimilatory
pathways are necessary for those metabolites that cannot be excreted as such by the
kidney or the liver because they are toxic or poorly soluble. Examples of metabolites
that require transformation before being eliminated are heme-bilirubin, ammonia,
sulfur and nitrogen oxides, etc.

Talking about IDDM-linked neuropathy, can the absence of the C peptide be considered a cause of it?
Bellelli: The C peptide released during the maturation of insulin, besides being an indicator
of the severity of diabetes, plays some incompletely understood physiological roles. For
example it has been hypothesized that it may play a role in the repair of the
atherosclerotic damage of the small arteries. That said, I am not aware that it plays a direct
role in preventing diabetic polyneuropathy. Diabetic neuropathy has at least two causes: the
microvascular damage of the arteries of the nerve (the vasa nervorum), and a direct
effect of hyperglycemia and decreased and irregular insulin supply on the nerve metabolism.
Diabetic neuropathy is observed in both IDDM and NIDDM, and requires several years to
develop. Since the levels of the C peptide differ in IDDM and NIDDM, this would suggest
that the role of the C peptide in diabetic neuropathy is not a major one. If you do have
better information please share it on this site!

In acute intermittent porphyria and congenital erythropoietic porphyria, why do the end products
of the affected enzymes accumulate instead of their substrates?
Bellelli: First of all, congratulations! This is an excellent question.
Remember that a condition in which heme is not produced is lethal in the foetus; thus
the affected enzyme(s) must maintain some functionality for the patient
to be born and to come to medical attention. All known genetic defects of heme
biosynthesis derange but do not block this metabolic pathway.
Congenital Erythropoietic Porphyria (CEP) is a genetic defect of uroporphyrinogen
III cosynthase. This protein associates with uroporphyrinogen synthase (which is present
and functional in CEP) and guarantees that the appropriate uroporphyrinogen isomer is produced
(i.e. uroporphyrinogen III). In the absence of a functional uroporphyrinogen III
cosynthase, other possible isomers of uroporphyrinogen are produced together with
uroporphyrinogen III, mostly uroporphyrinogen I. The isomers of uroporphyrinogen
that are produced differ in the positions of their propionate and acetate side chains,
and this in turn is due to the pseudo-symmetric structure of porphobilinogen. Only
isomer III can be further used to produce protoporphyrin IX. Thus in the
case of CEP we observe accumulation of abnormal uroporphyrinogen derivatives, which, as
you correctly observed, are the products of the enzymatic synthesis operated by
uroporphyrinogen synthase.
The case of Acute Intermittent Porphyria (AIP) is similar, although there may be variants
of this disease. What happens is that the affected enzyme either is a variant that does not
properly associate with uroporphyrinogen III cosynthase, or presents active-site mutations
that impair the proper alignment of the porphobilinogen substrates. In either case
abnormal isomers of uroporphyrinogen are produced, as in CEP.
Note also that in both AIP and CEP we observe accumulation of the porphobilinogen
precursor: this is because the overall efficiency of the biosynthesis of uroporphyrinogens is
reduced. Thus: (i) less uroporphyrinogen is produced, and (ii) only a fraction of the
uroporphyrinogen that is produced is the correct isomer (uroporphyrinogen III).


Is it possible to take gulonolactone oxidase to synthesize vitamin C,
instead of a vitamin C supplement?
Bellelli: no, this approach does not work. The main reason is that
the biosynthesis of vitamin C, like almost all other metabolic processes, occurs intracellularly.
If you administer the enzyme, it will at most reach the extracellular fluid but will not be
transported inside the cells to any significant extent. Besides, there are other problems
with this type of therapy (e.g. the enzyme, if administered orally, may be degraded by digestive
proteases; if administered parenterally, it may cause the immune system to react against a
non-self protein). In theory one could think of a genetic modification of the inactive human
gene of gulonolactone oxidase, but the risk and cost of this intervention would not be
justified. In addition to these considerations, except for cases of shipwreck or
other catastrophes, a proper diet or administration of tablets of vitamin C is effective,
risk-free and inexpensive, thus no alternative therapy is reasonable. However, congratulations
on your research into the biosynthetic pathway of ascorbic acid.


Resorption and not reabsorption would lead to hypercalcemia, i.e. bone matrix being broken down.
Bellelli: I am not sure I interpret your question correctly. Resorption indicates destruction of
the bone matrix and release of calcium and phosphate in the blood, thus it causes an increase
of calcemia. Reabsorption usually means active transport of calcium from the renal tubuli to
the blood, thus it prevents calcium loss. It prevents hypocalcemia, and thus complements bone
resorption. To avoid confusion it is better to use the terms "bone resorption" and "renal
reabsorption of calcium". If you have a defect in renal reabsorption, parathyroid hormone will
be released to maintain a normal calcium level by means of bone resorption; the drawback is
osteoporosis.

In Reed and Frost model: I haven't understood what is the relationship
between K and R reproductive index. Thank you Professor!
Bellelli: in the Reed-Frost model, K is the theoretical upper limit of
R0. R, the reproductive index, is the ratio (new cases)/(old cases) measured after
one serial generation time. R0 is the value of R one measures at the beginning
of the epidemic, when in principle the whole population is susceptible.
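
For concreteness, here is a minimal Python sketch of the deterministic Reed-Frost
recurrence (all parameter values invented for illustration). It shows R, computed as
(new cases)/(old cases), starting near R0 and declining as susceptibles are consumed;
R0 itself grows with p and approaches its upper limit when every contact infects:

# Deterministic Reed-Frost sketch: p is the per-contact infection probability.
# A susceptible escapes infection only by avoiding every current case.
def reed_frost(susceptible, cases, p, generations):
    for t in range(generations):
        new_cases = susceptible * (1 - (1 - p) ** cases)
        print(f"generation {t}: R = {new_cases / cases:.2f}")
        susceptible -= new_cases
        cases = new_cases

# With 1000 susceptibles and p = 0.002, R starts near R0 = 2
# and falls below 1 as the susceptible pool is depleted.
reed_frost(susceptible=1000.0, cases=1.0, p=0.002, generations=10)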

What is the link between nucleotide metabolism and immunodeficiencies and mental retardation?
Bellelli: the links may be quite complex, but the principal ones are as follows:
1) the immune response requires a replication burst of granulocytes and lymphocytes, which in turn requires
a sudden increase of nucleotide production, necessary for DNA replication. Defects of nucleotide metabolism
impair this phase of the immune defense. Notice that the mechanism is similar to the one responsible for
anemia, except that erythropoiesis requires a sustained biosynthesis of nucleotides at a constant rate,
rather than in a burst.
2) Mental retardation is mainly due to the accumulation of nucleotide precursors in the brain of the
newborn, due to the incompletely competent blood-brain barrier.

How can ornithine transaminase defects cause hyperammonemia? Is it due to the accumulation
of ornithine that blocks the urea cycle or for other reasons?
Bellelli: ornithine transaminase is required for the reversible interconversion of ornithine
and proline, and thus participates in both the biosynthesis and degradation of ornithine. The enzyme is
synthesized in the cytoplasm and imported into the mitochondrion. Depending on the metabolic conditions,
the deficiency of this enzyme may cause either an excess (when degradation would be necessary) or a
deficiency (when biosynthesis would be necessary) of ornithine; in the latter case, the urea cycle slows
down. Thus the paradoxical condition may occur in which episodes of hyperammonemia alternate with
episodes of hyperornithinemia.

When we use the Berthelot's reaction to measure BUN do we also have to
measure the concentration of free ammonia before adding urease?
Bellelli: yes, in principle you should. Berthelot's reaction detects ammonia,
thus one should take two identical volumes of serum, use one to measure free ammonia,
the other to add urease and measure free ammonia plus ammonia released by urea. BUN is
obtained by difference. However, free ammonia in our blood is so much lower than urea that
you may omit the first sample, if you only want to measure BUN.

Why do we have abnormal electrolytes in hematological neoplasia e.g.
leukemia?
Bellelli: I do not have a good explanation for this effect, which may have
multiple causes. However, you should consider two factors: (i) acute leukemias cause a massive
proliferation of leukocytes (or lymphocytes, depending on the cell type affected) with a very
shortened lifetime; thus you observe an excess death rate of the neoplastic cells. The dying
cells release their content into the bloodstream, and this content has an electrolyte composition
different from that of plasma: the cell cytoplasm is rich in K+ and poor in Na+, thus causing
hyperkalemia. (ii) the kidney may be affected by the accumulation of neoplastic white cells or
their lytic products.

Gaussian curve: if it is bimodal, is it more likely to give a "certain diagnosis" than if it is
unimodal, or does it only show the distinction from health?
Bellelli: an obviously bimodal Gaussian curve indicates that the disease is clearly
separated from health: usually it is a matter of how precise and clear-cut the definition of the disease is.
For example, tuberculosis is the disease caused by M. tuberculosis, thus if the culture of the sputum is
positive for this bacterium you have a "certain" diagnosis (caution: the patient may suffer from two diseases,
e.g. tuberculosis and COPD; diagnosis of the first does not exclude the second). However, in order to have
a "certain" diagnosis it is not enough that the distribution of the parameter is bimodal; it is also required
that the patient's parameter is out of the range of the healthy condition. This is because a distribution can be
bimodal even though it is composed of two Gaussians that present a large overlap, and the patient's
parameter may fall in the overlapping region. Thus, in order to obtain a "certain" diagnosis you need to
consider not only the distribution of the parameter(s) but also the patient's values and the extent of the
overlapping region.
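
As a toy numerical illustration of the overlap argument (all parameters invented), one can
compare the likelihood of the same test value under two overlapping Gaussians: in the
overlap region the ratio is close to 1, and no "certain" diagnosis is possible:

# Two overlapping Gaussians for a hypothetical parameter:
# a "healthy" and a "sick" component of a bimodal distribution.
from statistics import NormalDist

healthy = NormalDist(mu=100, sigma=10)
sick = NormalDist(mu=140, sigma=15)

for value in (100, 120, 160):
    ratio = sick.pdf(value) / healthy.pdf(value)
    print(f"value {value}: sick/healthy likelihood ratio = {ratio:.3g}")
# 100 -> ratio << 1 (clearly healthy); 160 -> ratio >> 1 (clearly sick);
# 120 falls in the overlap region, where the ratio is ~2: too ambiguous
# for a "certain" diagnosis.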

Prof can you please elaborate a bit more on the interhuman variability and its difference
with the interpopulation variability please?
Bellelli: every individual is a unique combination of different alleles of the same genes;
this is the source of interindividual variability. Every population is a group of individuals who intermarry and
share the same gene pool (better: allele pool). Every allele in a population has its own frequency. Two
populations may differ because of the different frequencies of the same alleles; in some cases one
population may completely lack some alleles. The number and frequencies of the alleles of each gene
determine the variance. If you take two populations and calculate the cumulative interindividual variance
of the pooled population, the number you obtain is the sum of two contributions: the interindividual
variance within each population, plus the interpopulation variance between the means of the allele
frequencies. For example, there are human populations in which the frequency of blood group B is close
to 0% and other populations in which it is 30% or more.
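
This decomposition can be checked numerically; the following sketch (invented,
allele-frequency-like numbers, two equal-size populations) verifies that the variance of the
pooled data equals the mean within-population variance plus the variance between the
population means:

# Law of total variance with two equal-size populations.
import statistics

pop_a = [0.0, 0.1, 0.2, 0.1, 0.1]   # hypothetical values, population A
pop_b = [0.3, 0.4, 0.3, 0.2, 0.3]   # hypothetical values, population B

total = statistics.pvariance(pop_a + pop_b)
within = (statistics.pvariance(pop_a) + statistics.pvariance(pop_b)) / 2
between = statistics.pvariance([statistics.mean(pop_a), statistics.mean(pop_b)])

print(f"{total:.4f} {within + between:.4f}")  # both 0.0140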

Prof can you please explain again the graph you have showed us in class about thromboplastin?
(Y axis=abs X axis= time)
Bellelli: the graph that I crudely sketched in class represented the signal
of the instrument (an absorbance spectrophotometer) used to record the turbidity of the
sample (turbidimetry). The plasma is more or less transparent before coagulation starts.
When calcium and the tissue factor (or collagen) are added, thrombin is activated and begins
digesting fibrinogen to fibrin; then fibrin aggregates. The macroscopic fibrin aggregates cause
the sample to become turbid, which means it scatters the incident light. The instrument reads
this as a decrease of transmitted light (i.e. an increase of the apparent absorbance), and the
time profile of the signal presents an initial lag phase, which is called the prothrombin or
thromboplastin time depending on the component which was added to start coagulation
(tissue factor or collagen).

Prof can you please explain the concept you have described in class about
the simultaneous hypercoagulation and hemorrhagic syndrome? How can this occur?
Bellelli: The condition you describe is observed only in the Disseminated
Intravascular Coagulation (DIC) syndrome. Suppose that the patient experiences an episode of
acute pancreatitis: trypsin and chymotrypsin are absorbed into the blood and proteolytically
activate coagulation, causing an extensive consumption of fibrinogen and other coagulation
factors. Trypsin and chymotrypsin also damage the vessel walls, but at that point the
consumption of fibrinogen may have been so massive that not enough is left to form a clot
where a vessel has been damaged, causing internal hemorrhages. Pancreatitis is a very severe,
potentially lethal condition, and DIC is only one of the reasons of its severity.

You said that certain drugs (ethanol, cocaine, cannabis, opiates...) require higher and
higher dosages, for two reasons: the enzyme in the liver is inducible and
the receptors in the brain are expressed less and less. So, first, I am not sure I got it right, and
second, I did not understand how expressing fewer receptors leads to the necessity of a higher
dosage.
Bellelli: You got it correctly, but the detailed mechanism of resistance may
vary among different substances, and not all drugs cause adaptation.
The reason why reducing the number of receptors may require an increased dosage of the drug
is as follows: suppose that a certain cell has 10,000 receptors for a drug. When bound to its
agonist/effector, each receptor produces an intracellular second messenger. Suppose that in
order for the cell to respond 1,000 receptors must be activated. The concentration of the
effector required is thus the concentration that produces 10% saturation. You can easily
calculate that this concentration is approximately 1/10 of the equilibrium dissociation constant
of the receptor-effector complex (its Kd), the law being
Fraction bound = [X] / ([X]+Kd)
where [X] is the concentration of the free drug.
After repeated administration, the subject becomes adapted to the drug, and his/her cells
express fewer receptors, say 5,000. The cell response will in any case require that 1,000
receptors are bound to the effector and activated, but this now represents 20% of the total
receptors, instead of 10%. The drug concentration required is now 1/4 of the Kd.
Continuing administration of the drug further reduces the cell receptors, but the absolute
number of activated receptors required to start the response is constant; thus the fewer
receptors on the cell membrane, the higher the fraction of activated receptors required.
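
A small sketch that reproduces the numbers above (the receptor counts are those used in
the answer; Kd is set to 1 arbitrary unit) makes the escalation explicit:

# Solve Fraction bound = [X] / ([X] + Kd) for [X], given that a fixed
# number of activated receptors is needed out of a shrinking total.
def required_concentration(kd, total_receptors, needed):
    f = needed / total_receptors      # fraction that must be bound
    return kd * f / (1 - f)

kd = 1.0  # arbitrary units
for total in (10_000, 5_000, 2_000):
    x = required_concentration(kd, total, 1_000)
    print(f"{total:>6} receptors: [X] = {x:.2f} x Kd")
# 10,000 -> ~0.11 Kd; 5,000 -> 0.25 Kd; 2,000 -> 1.00 Kd:
# the fewer the receptors, the higher the dose required.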

Why does hyperosmolarity happen in type 2 diabetes and not in type 1?
Bellelli: Hyperosmolarity can occur also in type 1 diabetes, albeit
infrequently. The approximate formula for plasma osmolarity is reported in the lecture on
electrolytes:
osmolarity = 2 x (Na+ + K+) + BUN/2.8 + glucose/18
this is expressed in the usual clinical laboratory units (mEq/L for the electrolytes, mg/dL for the
non-electrolytes). The normal values are:
osmolarity = 2 x (135 + 5) + 15/2.8 + 100/18 = 280 + 5.4 + 5.6 = 291 mOsmol/L
Let's imagine a diabetic patient having normal values for electrolytes and BUN, and glycemia = 400 mg/dL:
osmolarity = 280 + 5.4 + 22.2 = 307.6 mOsmol/L
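
These numbers are easy to reproduce; a minimal sketch of the approximate formula above
(same clinical units) is:

# Approximate plasma osmolarity: mEq/L for Na+ and K+; mg/dL for BUN
# and glucose, hence the conversion divisors 2.8 and 18.
def osmolarity(na, k, bun, glucose):
    return 2 * (na + k) + bun / 2.8 + glucose / 18

print(f"{osmolarity(135, 5, 15, 100):.1f}")  # ~291 mOsmol/L (normal)
print(f"{osmolarity(135, 5, 15, 400):.1f}")  # ~307.6 mOsmol/L (glycemia 400 mg/dL)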
The hyperosmolarity in diabetes is mainly due to hyperglycemia, even though other factors
may contribute (e.g. diabetic nephropathy); however, the contribution of glucose to osmolarity is
relatively small. As a consequence, in order to observe hyperosmolarity the hyperglycemia
must be extremely high; this is more often observed in type 2 than in type 1 diabetes, for
several reasons, the most relevant of which is that in type 1 diabetes all cells are starved of
glucose, and the global reserve of glycogen in the body is impoverished: there is too much
glucose in the blood and too little everywhere else, which reduces, but does not abolish, the risk
of extreme hyperglycemia. Usually in type 2 diabetes the glycogen reserve of the organism is not
impoverished, thus the risk of extreme hyperglycemia is higher.

Hemostasis and Thrombosis lecture: I don't understand why sodium citrate is
added to the plasma sample to measure the prothrombin time.
Bellelli: in order to measure PT or PTT you want to be able to start the
coagulation process at an arbitrary time zero, and measure the increase in turbidity of the
plasma sample. To do so you need (i) to prevent spontaneous coagulation with an anticoagulant;
and (ii) to be able to overcome the anticoagulant at will. Citrate (or oxalate, or EDTA)
has the required characteristics: it chelates calcium, and in this way it prevents coagulation;
but you can revert its effect at will by adding CaCl2 in excess of the amount
of citrate. You cannot obtain the same effect with other anticoagulants (e.g. heparin) whose
action cannot be easily overcome.

Dear professor, I cannot do the self-evaluation test because it says that the
time has expired. This is not possible because I haven't even started it.
Bellelli: this is due to the fact that the program registers your name and
matricola number from previous attempts. I shall fix this bug. Meanwhile, try to use a fake
matricola number.

How is nephrotic syndrome associated with hypoalbuminemia, as you described
in the methods of analysis of proteins? It seems counterintuitive.
Bellelli: nephrotic syndrome is an autoimmune disease in which the
glomerulus is damaged and the filtration barrier is disrupted; diuresis is normal but there is
loss of proteins (mostly albumin) in the urine.
I'm sorry, I confused polyuria with hypoalbuminemia, but my question still
stands: during glomerulonephritis you mentioned polyuria as a compensation;
I could not follow how this compensation mechanism works and why it collapses after
some time in glomerulonephritis.
Bellelli: the condition you describe is NOT characteristic of acute
glomerulonephritis. In glomerulonephritis there is damage of the glomerulus and severely
impaired GFR. Thus the diuresis is severely reduced and, due to impaired filtration, proteins
appear in the urine.
The condition you describe corresponds to the initial stage of chronic kidney failure,
usually due to atherosclerosis, diabetes, hypertension or other types of damage of the kidney
tissue. In this case GFR is impaired, albeit to a lesser extent than in glomerulonephritis, and the
excretion of urea is reduced. This leads to increased BUN. However, the increased concentration
of urea reduces the ability of the tubuli to reabsorb water, for osmotic reasons, yielding
compensatory polyuria. The patient has reduced GFR but normal or increased diuresis (urine
volume in 24 hours). To some extent this effect is beneficial, as it favors the elimination of
urea; however it cannot completely solve the problem, and in any case the progression of the
disease leads to kidney insufficiency. In essence, the point is that a moderately reduced GFR
can be partially compensated by reduced tubular reabsorption; a severely reduced GFR cannot.

(Lecture on hemogas analysis, interpretation of complex cases: standard pH)
Why, if PCO2 is less than 40 mmHg, is CO2 absorbed during equilibration? Thank you in advance.
Bellelli: if the PCO2 of the patient's blood sample is less than 40 mmHg, when
the machine equilibrates it with 40 mmHg CO2 the gas is absorbed: i.e. the new PCO2 becomes
40 mmHg and the total CO2 of the sample increases; as CO2 is the acid of the buffer, the
standard pH (in this case) decreases, whereas the standard bicarbonate will slightly increase.
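
To see the direction of the change, here is a small sketch using the Henderson-Hasselbalch
equation with illustrative values (for simplicity, bicarbonate is held constant, whereas in the
real measurement it increases slightly, as noted above):

# pH = 6.1 + log10([HCO3-] / (0.03 x PCO2)); the factor 0.03 converts
# PCO2 in mmHg to dissolved CO2 in mmol/L.
from math import log10

def blood_ph(bicarbonate_mm, pco2_mmhg):
    return 6.1 + log10(bicarbonate_mm / (0.03 * pco2_mmhg))

print(f"{blood_ph(24, 30):.2f}")  # patient sample at PCO2 = 30 mmHg: ~7.53
print(f"{blood_ph(24, 40):.2f}")  # after equilibration at 40 mmHg: ~7.40
# CO2 was absorbed and the pH fell: standard pH < actual pH.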

Professor, I don't understand how we arrive at this formula: accuracy =
sensitivity x prevalence + specificity x (1-prevalence)
Bellelli: ok, this relationship is poorly explained in your text, I shall improve its explanation.
We use the following definitions:
prevalence = sick individuals / total population;
accuracy = (true+ + true-) / total population;
sensitivity = true+ / sick individuals = true+ / (total population x prevalence);
specificity = true- / healthy individuals = true- / (total population x (1-prevalence));
thus we can rewrite:
sensitivity x prevalence = true+ / total population;
specificity x (1-prevalence) = true- / total population;
accuracy = sensitivity x prevalence + specificity x (1-prevalence)
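
The identity can be checked with a hypothetical 2x2 table (all counts invented):

# Confusion-matrix check of:
# accuracy = sensitivity x prevalence + specificity x (1 - prevalence)
tp, fn = 90, 10     # sick individuals:    true+, false-
tn, fp = 880, 20    # healthy individuals: true-, false+
total = tp + fn + tn + fp

prevalence = (tp + fn) / total       # 0.10
sensitivity = tp / (tp + fn)         # 0.90
specificity = tn / (tn + fp)         # ~0.978

direct = (tp + tn) / total
formula = sensitivity * prevalence + specificity * (1 - prevalence)
print(direct, formula)  # both 0.97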

Why do we use RNA primers in PCR and not DNA primers? I thought the
beginning of the sequence of the gene segment that is going to be formed is made of DNA.
Bellelli: the RNA primers synthesized by the enzyme primase are required in vivo,
where free DNA primers do not exist. In PCR, instead, we supply synthetic DNA primers:
DNA polymerases extend any annealed primer bearing a free 3'-OH end, whether RNA or DNA.



     