This has been a fascinating thread to read.
Many emotions, which is just right for getting others thinking, as long as everyone stays civil and, regardless of opinions, treats each other with respect.
With that said, since I am new to FT I'll keep my opinions to a minimum for now.
Perhaps another method to explain the "odds" of a pandemic is to look at history PRIOR to 1918. If we go way back to the first assumed influenza pandemic - 412 - and some probably earlier, an ordinary person can see that they occur with some regularity.
Then, if we take our current "preparedness" - limited antivirals, no VAX, hospitals without much surge capacity, limited PPEs, etc. - and apply our situation to known past pandemics - how would it have changed them?
When you hear that entire towns in Spain were wiped out, how would "modern medicine" change that situation?
I suspect the past pandemics, under modern conditions, may not have been quite as lethal, but there would still be many deaths. We always compare to 1918, while earlier pandemics were far deadlier (in percentage, not absolute numbers).
So our "range" of probability is not only 1918, but slightly better than the earlier, more lethal pandemics. It's not an exact number, but a "historical re-enactment" that may provide a yardstick.
"The next major advancement in the health of American people will be determined by what the individual is willing to do for himself"-- John Knowles, Former President of the Rockefeller Foundation
But inexact statements about the likelihood of a pandemic, and warnings,
are being made by the experts. That proves they think they have
a meaningful estimate which is not complete guesswork.
Yet they don't want to specify that estimate and prefer to leave
us in the dark. That's unethical.
They could tell us, that they are uncertain about the estimate and that
the estimate might change later when they rethink the matter,
but they _should_ give us a number.
And this number should still be better than our own estimate,
since we are not experts.
Having such numbers will provoke others to give their numbers too.
It will provoke discussion about which numbers are more reasonable.
Gamblers and bettors and traders will join to express their feelings
about the estimates, which could also be useful, since many people,
even experts, have no good "feeling" for likelihoods and probabilities,
while OTOH the gamblers have no feeling for the virological
expertise.
So, I'd hope for a useful discussion about the pandemic probability,
upon which politicians and individuals could finally base their
preparations and planning.
No doubt the "experts" do have a valid estimate but they're not going to tell anyone. Why? People will panic. Their numbers will provoke untold stupidity in otherwise docile human beings. In addition, fear is a great controller in our world. Fear has been used successfully throughout history to control the masses.
Prepare for the short term (3 to 4 months) and the long term (4 months to 3 years). If it isn't the pandemic that bags the collective behind, it will be some other event.
Application of Evidence Theory to Quantify Uncertainty in Forecast of Hurricane Path Svetlana V. Poroseva, Florida State University, Tallahassee, FL; and J. Letschert and M. Y. Hussaini
Results of any computations are of practical use only if information on their accuracy is also available. It is especially true in forecasting hurricane paths where the prediction accuracy is of vital importance. Yet, most current forecasts lack such information. The present study investigates the potential of using evidence theory [1] to provide a quantitative assessment of forecast accuracy and to develop a reliable procedure for combining different forecasts to produce the best possible prediction.
Uncertainties and errors in computational results on hurricane forecasts originate from various sources -- failure of a climate model to describe correctly the atmospheric physics and the interaction between the atmosphere and the ocean; stochastic nature of model parameters; errors associated with the discretization and algorithmic approximations, to mention just a few. Depending on the origin, the uncertainties can be categorized as aleatory (random, stochastic) and epistemic (due to lack of knowledge) uncertainties. In reality, the interaction among the uncertainty sources and the lack of knowledge make it generally impossible to identify and separate the sources to quantify their individual contribution to the total forecast uncertainty. In the present study, we define an appropriate measure to quantify the total uncertainty in forecasts of the given climate model, computer code and grid. Such a measure would allow one not only to compare quantitatively the performance of different climate models, but also evaluate the effectiveness of modifications introduced in a single model.
As both aleatory and epistemic uncertainties are intricately interwoven in a hurricane forecast, one needs a statistical theory that could handle them together to quantify their impact on forecasts. Whereas the probability theory could deal with the aleatory uncertainty and the possibility theory with epistemic uncertainty, the evidence theory [1] provides a systematic framework for such a study, and it is relatively well developed among other related theories. Unfortunately, there are very few practical applications [2] of evidence theory and they differ considerably from the one addressed here. Previously, we developed [3] an approach based on evidence theory to quantify uncertainty in turbulence computations. Results of testing the approach in a turbulent flow encourage us to apply similar approach to hurricane path forecasts with appropriate extension and modification.
Evidence theory provides the necessary tools not only to quantify the forecast uncertainty, but also to fuse the results of different forecasts. The idea of improving the overall credibility of hurricane path predictions by combining results of several forecasts is not new. Multimodel superensemble technique [4] is an example of the successful implementation of the idea.
However, multimodel forecasts like the single model forecasts do not provide information on the forecast accuracy. The present approach provides the quantitative assessment of the forecast accuracy and differs completely from other multimodel techniques in its mathematical foundation.
Briefly stated, the procedure involves first quantifying the uncertainty in results using each of several climate models under controlled conditions, where observation data for hurricane paths are available. Uncertainties in the spatial coordinates describing the hurricane position are quantified separately and assumed to be time dependent. This information is then combined with the results of simulations using each model to forecast a hurricane path for which observational data are not available. For each model at each instant of the forecast, we construct a grid centered on the model prediction of the hurricane position. Each grid interval in latitude and longitude directions is characterized by the degree of support (or belief) that the real hurricane position falls within the interval. The degree of support is the measure of model uncertainty. Its interval values are found during the previous step of the procedure. Then, the results from all models are fused using Dempster's rule of evidence theory to create a new prediction. The final prediction is also characterized by degrees of support. The details of the procedure will be discussed in the final paper.
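As an illustration of the fusion step described above, here is a minimal sketch of Dempster's rule of combination. The frame, interval labels, and mass values below are invented for this example and are not from the study:

```python
# Minimal sketch of Dempster's rule of combination from evidence theory.
# Each "mass function" assigns belief mass to subsets of a discrete frame;
# the masses from each source must sum to 1. All numbers here are invented.

def dempster_combine(m1, m2):
    """Fuse two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    # Renormalize, discarding the conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical forecasts of which grid interval holds the storm centre:
A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B                        # "either interval" (ignorance)
m1 = {A: 0.6, AB: 0.4}            # source 1 mostly supports interval A
m2 = {B: 0.3, AB: 0.7}            # source 2 weakly supports interval B
fused = dempster_combine(m1, m2)  # fused degrees of support
```

The fused masses again sum to one; with these inputs most support lands on interval A, since the two sources conflict only mildly.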
The database for hurricanes of 1998-2001 in the Pacific Ocean is used to quantify uncertainty in forecasts by global models from two operational centers -- the U.S. Navy Operational Global Atmospheric Prediction System (NOGAPS) and the European Centre for Medium-Range Weather Forecasts (ECMWF). The performance of these two models is compared for each year. We also track the effectiveness of annual modifications in each model. The data for hurricanes of year 2000 are used to evaluate the approach we developed to fuse different forecasts. Observational data for three hurricanes from the South Pacific region, three hurricanes from the East Pacific region, and six hurricanes from the West Pacific region are used solely for the evaluation of the quality of predictions obtained with the new technique. The data for other hurricanes of year 2000 are used to quantify the model uncertainty during the step preceding forecast fusing.
Authors express their gratitude to Professor T. N. Krishnamurti and Dr. V. Kumar (Department of Meteorology, Florida State University) for providing observational and model data necessary for this study.
References
1. Shafer, G., "A Mathematical Theory of Evidence," Princeton, NJ: Princeton University Press, 1976.
2. Oberkampf, W. L., Helton, J. C., and Sentz, K., "Mathematical Representation of Uncertainty," AIAA 2001-1645, 2001.
3. Poroseva, S. V., Hussaini, M. Y. & Woodruff, S. L., "On Improving the Predictive Capability of Turbulence Models Using Evidence Theory," AIAA-2005-1096, 2005.
4. Williford, C. E., Krishnamurti, T. N., Torres, R. C. et al., "Real-Time Multimodel Superensemble Forecasts of Atlantic Tropical Systems of 1999," Monthly Weather Rev., v. 131, 2003, pp. 1878-1894.
I believe many experts do not commit to a probability figure for a pandemic because it is too difficult to measure accurately. They are in a difficult situation: they are wrong if they do, and wrong if they do not.
Snowy, I don't understand what point(s) you are trying to make (as usual).
AD, this is different, because it's global damage. You can't prepare
for everything; you must set priorities.
AD, so ordinary persons can see that pandemics occur with some regularity, but scientists cannot? And if so, which of the two groups should we rely on?
Which other (flu) pandemics were far deadlier than 1918?
Florida, how does one "measure accurately" a probability figure? It's
an estimate, just as people estimate football games
or share prices. Experts and others can always be wrong when writing something, but that's no reason to stop writing.
I don't know how the hurricane study applies to our situation.
Its very first sentence is logically clearly wrong.
from Richard E. Neustadt, Ernest R. May, Thinking in Time: The Uses of History for Decision Makers (1988), page 152
First: if someone says "a fair chance" as before the Bay of Pigs
(or a "strong possibility" as with swine flu), or more generally,
"The Guatemalans won't let us keep our camps here," ask,
"If you were a betting man or woman, what odds would you put on that?"
If others are present, ask the same of each, and of yourself too.
Then probe the _differences_: Why? This is tantamount to seeking
and then arguing assumptions underlying different numbers
placed on a subjective probability assessment.
We know no better way to force clarification of meanings while
exposing hidden differences.
Judging by our students, millions of Americans are either
unfamiliar with numerical notations of probability or are
uncomfortable at the idea of stating a subjective judgement
in such terms. The doctors of the swine flu case, for instance,
recoiled from it, at least outside their own ranks.
On odds of a pandemic they refused to give Ford any number
beyond "1 to 99 percent". [12] But we have yet to find
a fellow citizen unused to placing bets and giving odds
on races or at games. So play it as a game and ask for odds
on each presumption. If the doctors or their fellow experts
hesitate, we offer the suggestion of one academic colleague
with extensive government experience: Ask instead, "When
I brief the press, with you by my side as an expert, and
I tell them the odds are X, will I be right? No?
Then how about Y?" And so forth.
Once differing odds have been quoted, the question "why?"
can follow any number of tracks. Argument may pit common sense
against common sense or analogy against analogy.
What is important is that the expert's basis for linking
"if" with "then" gets exposed in the hearing of other
experts before the lay official has to say yes or no.
A variant on giving odds is placing bets in the sense
of a challenge: "How much of your own money would you
wager that the presumed thing actually happens?"
Hidden differences are flagged by different sums.
[12]: Richard E. Neustadt, Harvey V. Fineberg, The Epidemic That Never Was (1983)
Estimating the Odds of a 1976 Pandemic
1) The CDC formed two panels of experts to estimate the probability of a pandemic
in 1976. Result:
---- 10%-25%
---- 40%
2) The members of the Advisory Committee (March 10, 1976) later said they had privately assigned
nonpublic probabilities for a 1976 pandemic of
---- 2%-20%
3) Departmental and White House aides assigned to that event probabilities of
---- 20%-50%
("the higher the level, the fewer the doubts")
from Richard E. Neustadt, Ernest R. May, Thinking in Time: The Uses of History for Decision Makers (1988)
pages 152-153
Our second test is "Alexander's question". One of us coined this term for it in another book,
and we might as well stick with the label [13].
The Alexander named is not Aristotle's pupil, the great conqueror, but merely the man who asked
the question in March 1976 at the Advisory Committee meeting that preceded the decisions
to immunize the country against swine flu. Dr. Russell Alexander [a], a public health professor
at the University of Washington, wished to know what fresh data from anywhere, including
the Southern Hemisphere, could cause his colleagues to revise or to reverse their judgement
that the country should get ready to be immunized _en masse_ starting next summer.
Mild outbreaks only? None? Time-frames? Locations? He asked but never got an answer.
In the circumstances it was the right question, we believe; pursuing it would
have flushed out a set of deeper questions, which also did not get asked: questions
about tradeoffs between side effects and flu, questions about programming and scheduling
review, questions distinguishing severity from spread, questions about stockpiling, and more.
In retrospect they all deserved a thorough airing. This they did not get. Still, the right
initial question _was_ asked. We draw our term from that.
Adapting but slightly, we put the emphasis on presumptions rather than conclusions, and urge
the comparable questions: What new _Knowns_ would bring you to change items _Presumed_?
When? And why? Passed around the circle, those should sharpen differences, spur debate,
and force out inferences, whether of courses or of values. The counterpart question did
not do that for Alexander. That is because the director of the CDC [b], chairing the meeting where
Alexander asked it, chose not to pursue it. We recommend hot pursuit.
[13]: Richard E. Neustadt, Harvey V. Fineberg, The Epidemic That Never Was (1983)
Keep asking the experts for their probability estimates of a pandemic
and their expected values of panflu deaths,
both in the next, say, 5 years.
And report the answers here!
In the aftermath of the next pandemic, what do you think will be considered the biggest mistake in pandemic preparations ?
Quote by Prof. Steele speaking on the forecasting of pandemics:
"I believe that we already possess a rich enough scientific base for the probability estimation problem to be moved beyond the simple subjective probability estimates of individuals."
I disagree. The amount of mathematical modeling required to forecast the probability of a pandemic would be immense.
Some of the factors that would need equations:
avian infection in the majority of the world's geography,
endemic infection in the environment in South East Asia,
domestic bird infection in many countries,
amount of possible vectors - birds, poultry, feces, mammals, etc.
life of organism outside of the host,
genetics changes possible to virus,
human to human transmission rates,
rate of change in genetic structure of the virus,
etc. etc. etc.
In addition, it seems that many of the "experts" cannot agree whether recombination is a factor or not. This makes the equations needed to calculate genetic changes flawed.
The best estimate we are going to get is based on the cyclical nature of history and the level of concern among various experts that this virus may become sustained human-to-human.
People ask me what the chances are that this virus will become pandemic and the truthful answer is that I do not know.
But it is possible - why not prepare? This preparation will be useful for the other calamities that happen on a regular basis: earthquakes, hurricanes, fires, job loss, etc.
>Quote by Prof. Steele speaking on the forecasting of pandemics:
>
>"I believe that we already possess a rich enough scientific base
> for the probability estimation problem to be moved beyond the
> simple subjective probability estimates of individuals."
>
>I disagree. The amount of mathematical modeling required to forecast
>the probability of a pandemic would be immense.
That was an inexact "enough".."beyond" statement by Steele. No reason to
disagree; it's just a matter of interpretation.
Collecting the individual probabilities and forming the average
might already suffice.
Complete mathematical modeling is not meant. Just, maybe,
some mathematical tools or calculations on subproblems
or similar problems to aid in building the feeling for a good estimate.
Different experts will still give different estimates,
even when they are very well informed on all available data.
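A simple pooling of that kind could look like this; the experts and the numbers below are invented purely for illustration:

```python
# Sketch: pooling subjective probability estimates from several experts.
# Names and numbers are hypothetical, not real published estimates.

estimates = {                 # expert -> P(pandemic within 5 years)
    "expert_A": 0.10,
    "expert_B": 0.25,
    "expert_C": 0.02,
    "expert_D": 0.40,
}

mean = sum(estimates.values()) / len(estimates)             # pooled estimate
spread = max(estimates.values()) - min(estimates.values())  # disagreement

print(f"pooled estimate: {mean:.2f}, spread: {spread:.2f}")
```

Reporting the spread alongside the mean matters: a wide spread is itself information, flagging that the experts disagree and that the pooled number is soft.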
>Some of the factors that would need equations:
>
>avian infection in the majority of the world's geography,
>endemic infection in the environment in South East Asia,
>domestic bird infection in many countries,
>amount of possible vectors - birds, poultry, feces, mammals, etc.
>life of organism outside of the host,
>genetics changes possible to virus,
>human to human transmission rates,
>rate of change in genetic structure of the virus,
>etc. etc. etc.
Yes, lots of factors. You should evaluate them all, assign
probabilities to them on paper, and then calculate the overall
probability by combining the sub-probabilities. We usually develop some feeling for this
without doing the intermediate, helpful steps on the subproblems and different aspects.
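One naive way to combine sub-probabilities on paper is to chain the stages a pandemic would have to pass through; the stages and numbers here are entirely hypothetical, and the independence assumption is itself a strong simplification:

```python
# Sketch: combining sub-probabilities for a chain of events that must
# all occur before a pandemic starts. Stages and numbers are invented;
# treating them as independent is a strong simplifying assumption.

stages = {
    "virus acquires efficient human-to-human transmission": 0.10,
    "outbreak escapes local containment":                   0.50,
    "spread becomes global":                                0.80,
}

p_overall = 1.0
for stage, p in stages.items():
    p_overall *= p            # multiply, assuming independence

print(f"combined estimate: {p_overall:.3f}")  # prints: combined estimate: 0.040
```

In reality the factors interact, so such a product is only a starting point for discussion, not a forecast.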
>In addition, it seems that many of the "experts" can not agree
>if recombination is a factor, or not.
most seem to think that it is not so important
>This makes the equations needed to calculate genetic changes flawed.
estimates will vary. Take the average.
>The best estimate we going to get is the cyclical nature
>of history and the level of concern that various experts
>have that this virus may become sustained human to human.
you may formulate it this way (level of concern --> probability estimate)
>People ask me what the chances are that this virus will become
>pandemic and the truthful answer is that I do not know.
No. You are not certain, but you do know something. If you had to give a number,
you would. If you had more time, you might give another number, though.
But you know more about the problem than nothing.
You would probably agree that it's >1% and <99%. That's still
something, and better than "I do not know".
Your number would still probably be informative for others
despite your uncertainty.
>But it is possible - why not prepare? This preparation will be
>useful for the other calamities that happen on a regular basis:
>earthquakes, hurricanes, fires, job loss, etc.
maybe, maybe not. That's another question.
And the question is not whether to prepare or whether not to
prepare, but how much to prepare.
I wish there was a way to predict, in a definitive way, the probability of a pandemic, but there is not.
Any number is suspect.
This is what I do know 100%:
1) There will be a viral flu type pandemic,
2) H5N1 has a possibility to be that flu pandemic,
3) Many esteemed experts from around the world have been concerned since 2003 that this virus will be the next pandemic virus,
4) The U.S. government has recommended that all citizens obtain a supply of food and water for their homes,
5) The U.S. government has issued pandemic plans and updates,
6) The U.S. government has conducted a series of conferences around the country for a year to alert business and citizens about the potential pandemic,
7) H5N1 is spreading worldwide via wild birds and importation of infected birds,
8) There have been cases of limited H2H transmission,
9) H5N1 is endemic in the environment in many countries in South East Asia,
10) The case fatality rate remains high for cases that are documented.
Each person must evaluate the above and determine for themselves what the risk factor is for H5N1 to become the next pandemic.
I personally think it is possible and that everyone should obtain at least 8 weeks of essential supplies in their homes. This is reasonable.
You can always say "I do not know", but that's not very informative.
We could stop communication altogether.
>I wish there was a way to predict, in a definitive way, the probability
>of a pandemic, but there is not.
not to predict, but to give a subjective estimate. This is just a tool to
communicate your feelings. Don't expect too much from it.
>Any number is suspect.
but some are more suspect than others ;-)
>This is what I do know 100%:
>
>1) There will be a viral flu type pandemic,
not 100%
>2) H5N1 has a possibility to be that flu pandemic,
>3) Many esteemed experts from around the world have been
> concerned since 2003 that this virus will be the next pandemic virus,
>4) The U.S. government has recommended that all citizens
> obtain a supply of food and water for their homes,
not all citizens AFAIK, but I'm not sure.
However other countries haven't done this. Why do you think the
US-government is more credible than the governments of other
countries with similar conditions ?
>5) The U.S. government has issued pandemic plans and updates,
>6) The U.S. government has conducted a series of conferences
> around the country for a year to alert business and citizens
> about the potential pandemic,
>7) H5N1 is spreading worldwide via wild birds and importation of
> infected birds,
probably, but not 100%
>8) There have been cases of limited H2H transmission,
some people still doubt this
>9) H5N1 is endemic in the environment in many countries in South East Asia,
however you define this
>10) The case fatality rate remains high for cases that are documented.
you only listed negatives but left out all positives.
>Each person must evaluate the above and determine for themselves
>what the risk factor is for H5N1 to become the next pandemic.
They can't. They are not experts. Others are much more qualified
to estimate this and much better informed.
I'm not comfortable with all the laymen making their own estimates
based on lists like yours above.
>I personally think it is possible and that everyone should
>obtain at least 8 weeks of essential supplies in their homes.
>This is reasonable.
There is a contradiction. When you have no clue about the probability
of a pandemic, as suggested by "I don't know", then you can't
know whether that recommendation is reasonable.
Snowy, what I meant is that when you know afterwards that there was a pandemic,
then you can easily argue that we had been underprepared.
The same in the opposite direction: being overprepared for a non-pandemic, as in 1976.
But how can we estimate now the likelihood and possible impact of a pandemic?
That's the decisive question when we want to decide how much money to spend
on preparations. You can still make mistakes in how to prepare and where
to set the priorities, as you pointed out.
Only if we estimate this probability poorly by overlooking some facts,
and make wrong decisions based on that estimate, can we
reasonably be accused later.
I said "I personally think it is possible", therefore, based on my personal judgement, 8 weeks of essentials in the home that can be used for any emergency situation, is reasonable.
You can not argue with my personal opinion. It is subjective and my evaluation of the situation.
I know that I am only a female and not capable of understanding probabilities, but I think it is a high probability that you have too much time on your hands.
I don't think, the time is wasted. This is important.
Then you seem to have clearer estimates than you want to admit.
Please give us your number, to better communicate
your sense of how necessary prepping is.