Thursday, September 18, 2008

 

What the Canadian UFO Survey is all about

Against my better judgment, I engaged in a debate on UFO Updates defending the results of our Canadian UFO Survey. In short, I don't think it needs defending, but some people on Updates dispute our finding that only a few per cent of the UFO cases reported in Canada each year are high-quality Unknowns. Their feeling is that our method must be flawed because such a low percentage is also what the debunking exercises (Condon, Blue Book, etc.) found, and therefore we must be wrong. The percentage of Unknowns, they insist, must be higher, on the order of 25 to 50 per cent. Well, it isn't. Here is the latest exchange:

>>>Date: Sat, 13 Sep 2008 04:15:00 EDT
>>>Subject: Re: Colorado Project Had More Than 50%

>>>What percentage of _all_ high quality cases are Unknowns? You
>>>don't tell us. Your UFO Survey doesn't tell us.

>>Sure it does. The data is right there. That's why we provide it;
>>to allow anyone to do more analysis.

>No you don't. You provide _some_ of the _data_ (not for 1989 to
>1993) but you don't tally up the High Quality case _statistics_

You're right. Geoff didn't do all possible tallies and combinations. Tsk.

>data. The statistics data (the word "data" is plural by the way)

Please. Random House Dictionary says that data is a singular noun
meaning "body of information." You've just gone beyond
objective criticism into personal attack. Great.

>are not there. You don't provide it, as apparently you expect
>people to plow through 7,600 cases to find the High Quality
>cases, and tabulate statistics on them because you are not that
>interested in High Quality cases.

Yes, well, we do expect serious researchers to do some work for
themselves, too. But we can work on this in future studies.
As a matter of fact, we'll be looking at this in the 20-year study
which we will publish next year.

>You only mention one _single_ year with a figure for high-
>quality Unknowns (2007), which I had already mentioned in my
>posting. What did you think I was talking about that was
>different from that ?? Never mind, that will obviously get
>nowhere.

True.

>What percentage of _all_ High Quality cases in the Canadian UFO
>survey from 1989 to date are Unknowns? You can't provide that
>figure because your Survey does not tabulate the data on case
>quality online, whereas you insinuate that "anyone" can just go
>and look it up. There is no table with the 1989 to 2007 data
>quality stats.

The entire 1989 to present database is not available online at this
time. I've noted elsewhere that I had to go back and manually re-enter
cases from the 1980s and early 1990s because the earlier databases are
not compatible. Silly dBase and Quattro Pro.

As for anyone using the data online: as a test a few minutes ago, I
imported an earlier year's HTML data into Excel with a click of the
mouse and ran a sort on it. It worked, and I found that about a third
of the higher-quality cases for that year were Unknowns, too.
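
For anyone who would rather script that test than click through Excel, here is a minimal sketch in Python with pandas. The file name and the column headings ("Reliability", "Evaluation") are placeholders, not the real file or headings; adjust them to whatever a given year's table actually uses.

    # Minimal sketch: load one year's Survey table and tally high-quality Unknowns.
    # "survey_2007.html" and the column names are assumptions, not the real file.
    import pandas as pd

    cases = pd.read_html("survey_2007.html")[0]            # first table on the page
    cases["Reliability"] = pd.to_numeric(cases["Reliability"], errors="coerce")

    high_quality = cases[cases["Reliability"] >= 7]
    unknowns = high_quality[high_quality["Evaluation"] == "Unknown"]

    print(len(high_quality), "high-quality cases")
    print(len(unknowns), "Unknowns, or",
          round(100.0 * len(unknowns) / len(high_quality), 1), "per cent")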

Geoff is working on getting the 7000+ cases into a single file.
Apparently, he is having trouble finding time to do this because he
has a life.

>As far as I can tell 2005 was the first Survey to give the
>High-Quality Unknowns percentage, so we only have that figure
>for 2005 (7%), 2006 (less than 1%) and 2007 (less than 1%). You
>give no explanation for these figures further on in your
>posting, where you come up with 30% to 31.6%.

I just checked the 2003 data to see if this is true. It's not.
From the 2003 Survey:

"There were 111 Unknowns out of 673 total cases in 2003.
If we look only at the Unknowns with a Reliability rating of 7
or greater, we are left with 28 high-quality Unknowns in 2003
(about four per cent of the total)."

>Why didn't you explain this huge discrepancy? As I had said in
>my posting, your survey was claiming less than 1% High-Quality
>Unknowns. Now you say 30%. It's your Survey, please explain it!

There is no discrepancy. The percentage of Unknowns with high
Reliability ratings remains only a few per cent of the total data. The
30 per cent (or so) figure comes out when we eliminate all low-Reliability
cases and look only at the higher-quality ones.
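
To spell out the arithmetic with the 2007 figures quoted further down this exchange (836 reports in all, 95 cases with a Reliability rating of 7 or more, 30 of those Unknown), the same 30 cases give two very different percentages depending on the denominator. A quick sketch in Python:

    # 2007 figures, as quoted later in this post
    total_reports = 836.0   # all UFO reports in 2007
    high_quality = 95.0     # cases with a Reliability rating of 7 or more
    hq_unknowns = 30.0      # Unknowns among those high-quality cases

    # Same numerator, different denominators
    print(round(100 * hq_unknowns / total_reports, 1))  # 3.6 -- a few per cent of all reports
    print(round(100 * hq_unknowns / high_quality, 1))   # 31.6 -- about a third of the high-quality cases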

Incidentally, I have just got around to reading Ann Druffel's excellent
book on James McDonald. In the first few pages, she notes that in his own
study of cases over five years in his immediate area, he found that only a
few percent of the UFOs reported were Unexplained. Maybe take up your
argument with him, too.

>I'm sorry but this just looks like gobbledygook manipulation of
>statistics.

I'm sorry you feel that way.

>You still don't explain why your Unknowns percentage is so much
>lower than the Condon Report's.

Because they didn't look at raw UFO data?

>Why aren't you doing a better
>job than the Colorado Project of the 1960's on IFO's vs. UFO's?

You tell me. Oh wait, you did:

>Well there is a reason, as I have stated here on UFO UpDates any
>number of times. You deliberately seek out and inflate your
>stats with IFO cases. Here is what you say in the 2007 Survey,
>that in the first few years of the Survey you had essentially no
>IFO's at all, so then you decided you wanted IFO's for some
>strange reason:

>"Contributors were then encouraged to submit data on all UFO
>reports they received [meaning IFO's], so that a more uniform
>assessment and evaluation process could be realized."

>So you deliberately sought out IFO reports to pad and line and
>artificially inflate your statistics.

If that's how you want to interpret that. Our view is that we realized
we weren't getting all the data from investigators. Some were not
bothering to contribute the IFOs they were dealing with. That meant
we would be artificially inflating (your phrase) our percentage of
Unknowns in the broader database if we didn't include all cases
reported as UFOs (but later explained as IFOs). The original purpose
of the study was to see what was being reported as UFOs, not to just
look at Unknowns. (That's a separate research project.)

>This is how you managed to accumulate 7,600 mostly Poor Quality
>cases.

GIGO
Most UFO case reports are, unfortunately, not of high quality.

>You don't do any
>legitimate scientific control studies of the IFO case sample
>with the UFO case sample (I shudder to think how you would
>contaminate each sample with indeterminate, insufficient data
>cases).

I'll leave this for Geoff to answer.

>Judging from your posted comments (below) on 2007, as a
>guess I would say you have less than 1,000 High Quality cases
>out of about 7,600 cases since 1989, so the remaining 6,600+
>would be Poor Quality.

Maybe.

>You should get the Poor Quality cases out of your statistics as
>Hynek urged so many years ago (1976 in The Hynek UFO Report, p.
>259), saying:
>"Insufficient Information cases should [be] excluded from
>statistical computations altogether."

Perhaps. But our Survey was originally designed to find out what
was actually being reported in terms of UFO sightings. It does that very
well. I have no problem with the fact that the analysis of the data is
something that needs debate and discussion. That's why we make
the data available for anyone to play with, and why I even bother
defending it here.

>That does _not_ mean burn them, destroy them, or shred them. The
>Poor Quality reports remain in the files, just should not be in
>the statistics.

I disagree.

>There are many indicators of the poor quality of the vast
>majority of your 7,600 reports. You have almost 40 reports where
>you don't even know the Province! How are those valid cases and
>not garbage reports with Insufficient Data? You have about 50
>reports without even the Month known! God only knows how many
>cases have the Month but are missing the Date.
>You have over 100 cases where you can't even make a Modified
>Hynek Classification of the events! How is that even possible?

It's possible because many people report their sightings to UFO
organizations and don't list or can't recall the date or time. As for
no information on the province, I know of several instances where
the source was a government or military file that listed date, time
and other data, but the actual location was blacked out or
unavailable to us. Should those cases not have been included?

My view is that cases with missing data should be included in an
analysis but noted as Insufficient Data. Leaving them out would lower
the tabulated number of reports actually made by witnesses, suggesting
that fewer sightings were reported than actually were.
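
Purely as an illustration of that include-but-flag approach (the field names here are made up, not the Survey's actual database schema), the idea is simply this:

    # Sketch of "include but flag": incomplete reports stay in the totals,
    # but are evaluated as Insufficient Data rather than silently dropped.
    # Field names ("province", "date", "evaluation") are illustrative only.
    def evaluate(case):
        if not case.get("province") or not case.get("date"):
            return "Insufficient Data"
        return case.get("evaluation", "Unknown")

    reports = [
        {"province": "MB", "date": "2007-06-12", "evaluation": "Explained"},
        {"province": None, "date": "2007-08-03", "evaluation": "Unknown"},  # location blacked out
    ]

    print([evaluate(r) for r in reports])  # ['Explained', 'Insufficient Data']
    print(len(reports))                    # both reports still counted: 2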

>So for 2007, we find that almost 100 of the 836 cases have no
>times or hours. Almost half these cases have no time Durations
>(about 366 of them)! Your Survey claims that "The duration of a
>sighting is one of the biggest clues to its explanation." Well
>if you don't have a Duration then you have by your own admission
>Insufficient Information for "clues to ... explanation."

Indeed.

>You have 436 single-witness cases out of 836, or over half of
>your 2007 data set, another strong indicator of Poor Quality
>cases. Hynek would eliminate virtually all single-witness cases
>unless there were good scientific reasons for retaining them.

You'd eliminate single-witness cases as not worth including in a
statistical study on UFOs?

Can others on the list comment on this please?

>Some 382 of the 836 reports are Point Source cases, another
>strong indicator of Poor Quality. Over 100 additional cases do
>not even have a Shape reported! How is it even possible to have
>no Shape?? What on earth could be reported without a Shape??
>What is being reported in those cases?? What kind of "UFO" or
"IFO" has no Shape?? Why do you have such reports in the first
>place?

e.g. Witness: "I saw a UFO flying along the horizon, about half a
mile away. Its light was too bright to see any shape."

>You don't even have an Angular Size category of data. Angular
>Size and Duration are the two primary factors that determine the
>amount of visual data a human witness receives from a sighting
>and you don't even collect the Angular Size data!

Please understand: we don't collect angular size data because we get
our reports from many different sources, most of which don't record
angular size themselves. Heck, I can't even recall a recent military
report where angular size was included. And angular size isn't
something found in most of Peter Davenport's case files, for example.

Would it be nice to have? Absolutely.

>And you also shamelessly inflate your statistics with astronomer
>reports of meteor fireballs, on the flimsy excuse that you need
>them for explaining UFO reports. You never answer why you need
>to contaminate the _statistics_ with meteor reports, when you
>can simply keep the meteor reports on file. Keeping the meteor
>reports on file for investigative purposes does not force you to
>add them to your statistics.

We only did this briefly, after discussing it for a while. We had
found that some "meteor" reports in astronomical databases
were not in fact meteors, since some lasted for many minutes
or even hours. They may have been rightly called UFOs, but were
labelled as IFOs by astronomers because they couldn't possibly
have been UFOs. We have not included MIAC reports for some time.

>>When considering only higher-quality Unknowns (using a
>>Reliability level of 7 or more), here's the breakdown for 2007:
>>Reliability Number Unknowns Percent
>> 7 68 21 30
>> 8 24 8 33.3
>> 9 3 1 33.3
>>Total 95 30 31.6
>>So in this analysis, Unknowns comprise about 30% of the high-
>>quality cases. I gather this is more like you're wanting it to be.

>You don't explain why your 2006 and 2007 Surveys claim the High
>Quality Unknowns were less than 1% of High Quality reports.

Oy.

>The Unknowns do not "stand out" if they are less than 1% of them!!

Ok.

So here's the thing:

We spend many, many hours of our spare time mining the net and poring
through UFO case reports to see what is being reported in Canada. We are
unfunded and do this when we can, with limited technology. We evaluate
the available information, enter it into a database and run some simple
statistics on the data. We report on the numbers and percentages of what
we find, noting demographics and distributions across time and space.
The resulting report is published for discussion and further research,
and the data is made available in a table each year.

What ufologists do with the information, data and study is up to them.

My vision is to create a kind of "Robertson Panel" (the "Rutkowski
Panel"?) where funds could be made available to fly in about a dozen
ufologists, non-ufologists and scientists to spend a few weeks together,
each looking at all UFO reports from a given month or year, assessing
their quality, evaluating them objectively and coming to a consensus.
My guess is that they would find the same distribution we get with the
Canadian UFO Survey, but perhaps they wouldn't. It would be an
interesting exercise.

I think ufology needs more good case investigation, too. Once an
interesting case gets identified as such, it would be nice to pour
considerable time and money into in-depth investigation. Even if there
are only a few dozen such cases a year, doing exhaustive work on them
might yield valuable information. Stephenville and the O'Hare cases were
a good start.