How can I mention errors in the data that I received in my thesis? Where to mention it?
I received data for the data analysis in my Bachelor's thesis. I still have 4 weeks left to finish the 40 pages, and after getting all my results, my supervisor and I realized that the data I received had a conversion-factor error (we don't even know the magnitude of this factor: is it 3/4 times higher? etc.). Anyway, I want to mention this issue in my thesis, but I am not sure 1) where to mention it (methods? results?) and 2) how to talk about it. Thank you all!
With four weeks still to go, the emphasis of your thesis has shifted from presentation to finding out what went wrong. This will lead to one of three probable outcomes:
1) You discover exactly what went wrong, and it's a simple factor with no effect on the measurements. A note in the method or at the beginning of the results (e.g. "Measured values in mph converted to m/s") would be sufficient.
2) You discover what went wrong, and there could have been an effect on the implications of the results (e.g. "Variations in volumetric flow did not account for a change in viscosity with an increase in temperature."). If there's still time to retest (on a smaller sample if necessary), this would be the best way to go; if not, your investigation should be reported. The process of discovering the problem and your resolution of it has become part of the thesis, and should be reflected in the method and results. Your conclusions might be no different, or you may have to report inconclusive results.
3) You are unable to discover or quantify what went wrong. You would now be writing a very different thesis: a report on the problems encountered rather than something leading to a demonstrable conclusion. You can still demonstrate valid scientific technique; this has become the object of the thesis rather than what you originally planned, and your conclusions (if any) should reflect that.
The investigation into what happened is the important thing now, as the way you present your observations will depend on what you find. Whichever way it goes, there is an outcome where you can present valid scientific observations, but the way you present them will be different.
[I'm also going to agree with Amadeus's suggestion of applying possible factors iteratively: if you can discover the size of the variation, it may give you an idea of what caused it. For example, if you keep running into the numbers 9.81 or 3.142, you could guess at standard gravitational acceleration in SI units (m/s^2) or a circumference/diameter mix-up (a stray factor of pi); a rough sketch of such a check follows below.]
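To make that concrete, here is a minimal Python sketch of such a check. It assumes you can compare at least a few received values against independently verified reference values; the KNOWN_FACTORS table and the guess_conversion_factor name are made up purely for illustration.

    import math

    # Hypothetical table of factors a suspicious ratio might match
    # (names and values chosen for illustration only).
    KNOWN_FACTORS = {
        "standard gravity (9.81 m/s^2)": 9.81,
        "pi (circumference vs. diameter mix-up)": math.pi,
        "mph -> m/s": 0.44704,
        "inches -> mm": 25.4,
    }

    def guess_conversion_factor(observed, reference, tolerance=0.02):
        # Ratio between a value as received and the value it should be.
        ratio = observed / reference
        matches = []
        for name, factor in KNOWN_FACTORS.items():
            # Check the factor and its inverse (the conversion could go either way).
            for candidate in (factor, 1.0 / factor):
                if abs(ratio - candidate) / candidate < tolerance:
                    matches.append((name, round(candidate, 5)))
        return ratio, matches

    # Example: a received value of 44.7 against a verified value of 100
    # points at an mph -> m/s mix-up.
    print(guess_conversion_factor(observed=44.7, reference=100.0))

If several independent comparisons keep matching the same factor, that is good evidence for what caused the error.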
It's worth remembering that as long as a consistent and repeatable process has been followed and reported, results that are inconclusive or which appear to contradict a hypothesis are valid results.
Do not publish ANYTHING you know is untrue, or even suspect is untrue.
I am a PhD, a research scientist and former college professor.
You are just in trouble. You cannot publish conclusions that do not hold if the data is in error; you would be publishing a known falsehood.
Your best bet is to rescale the data by some amount, say a factor of 10, or convert mm to inches or vice versa, or Fahrenheit to Centigrade, and see if the same conclusions hold. If the number is arbitrary, try several, like [.25, .5, 2, 5, 10, 50].
If they all give the exact same results, you might be able to say (very early, like at the end of your introduction) that your data was found to have a scaling error of unknown magnitude, but that your conclusions held when the data was rescaled by several different magnitudes [.25, .5, 2, 5, 10, 50]; thus there is reason to believe the results are scale-invariant.
However, if these experiments do NOT give the same results, you should search for how big or how small the scaling factor can be to get the SAME results, and report that. Test in 10% increments; e.g. [0.10, 0.20, ..., 0.90] for how small, and in larger increments [1.25, 1.50, 2.0, 2.50, 3.0, 3.50, 4.0, 5.0, 7.0, 10.0].
Then you can say (very early) that a scaling error of unknown magnitude was discovered in the data after the completion of the study, but that your results hold if the data is rescaled by any factor in [0.25, 5.0]. THAT IS AN EXAMPLE; you will have to find the upper and lower bounds yourself.
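Here is a minimal sketch of that kind of sensitivity check, assuming Python with NumPy; draw_conclusions, the two example statements, and received_data.txt are hypothetical placeholders for whatever your analysis and data actually are.

    import numpy as np

    def draw_conclusions(data):
        # Hypothetical stand-in for your analysis: return the qualitative
        # statements your thesis actually makes, as booleans.
        half = len(data) // 2
        group_a, group_b = data[:half], data[half:]
        return {
            "less than 10% of samples exceed 100": np.mean(data > 100.0) < 0.10,
            "group A averages more than 3x group B": np.mean(group_a) > 3 * np.mean(group_b),
        }

    data = np.loadtxt("received_data.txt")   # the data as you received it, error and all
    baseline = draw_conclusions(data)

    for factor in [0.25, 0.5, 2, 5, 10, 50]:
        rescaled = draw_conclusions(data * factor)
        unchanged = rescaled == baseline
        print(f"factor {factor:>5}: conclusions unchanged? {unchanged}")

Note that the ratio-based statement survives any constant rescaling, while the statement tied to an absolute threshold may not; that is exactly the distinction drawn next.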
If, analytically, your reasoning is relative (for example, saying "less than 10% of the samples met condition X" or "these samples were more than 3 times the magnitude of those samples"), then a constant scaling factor will not change the logic of relative statements.
You should examine your paper and see which statements are relative and which are NOT. For example, if you thought the temperature was in Fahrenheit and said a temperature of 20 was below freezing (for water), and then discovered the temperatures are in Centigrade, well, 20 C is 68 F, nowhere near freezing, and that logic and whatever follows from it has to be revised or deleted.
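As a tiny worked check of that example (the conversion formula is standard; the variable names are just for illustration):

    def c_to_f(c):
        # Standard Celsius -> Fahrenheit conversion.
        return c * 9 / 5 + 32

    temp = 20
    print(temp < 32)           # read as Fahrenheit: True, "below freezing"
    print(c_to_f(temp) < 32)   # actually Celsius (= 68 F): False, nowhere near freezing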
It's been a while since I wrote a scientific thesis, so apologies, but this sounds like something that would be discussed in the Results section: explain why your data was returning bad values and what misstep in the process those results can be attributed to. The Methods section lists the steps you took to obtain the data, so it should not need to change, provided you acknowledge the problem in the Results section.
You should also have a Conclusion section where you write your initial conclusion and follow up with a revised conclusion that accounts for the problematic data. Best form is to draw all the conclusions you can from the data that was not returned erroneously, then acknowledge what cannot be concluded because of the bad data, and note the modification that could correct this error in your testing. Science is as much about proving what you know as it is about acknowledging what was not proven and why.
In the event that the erroneous data invalidates every conclusion, start by admitting the problem and showing what steps were not taken that led to this situation. You may be able to get away with discussing initial conclusions, but be mindful that they are inconclusive without the right data set. If a conclusion is so far off that it is absurd, discuss the logical reasons why it cannot be accepted as a valid possibility.