January 28, 2008 at 8:38 am #9008MRSParticipant
I’m working with fluorescent (6-FAM) AFLP data and I’m having a lot of problems. Everything is OK when I analyse data from one experiment, but when I try to analyse together data from samples processed at different times, the best result I get is still a clustering of the data based on the date of analysis… Do you have any suggestions?
February 2, 2008 at 1:45 pm #81355blcr11Participant
Warning. Someone who doesn’t do these things may be about to make a bonehead suggestion.
I don’t really do this kind of thing, so if this is nonsense, just ignore it. It seems that we crystallographers have a similar problem when we process data and/or have to compare x-ray intensities between different crystals of the same thing. When we process data, we have to integrate the intensity of spots that may occur in one to several frames of data, but the experimental numbers vary with time because of a number of factors, including the properties of the x-ray beam, the exposure history of the crystal, the path-length of the x-rays through the crystal, and so on. Our “fix” is to scale all the frames together so that the average intensity is as close to the same across frames as we can make it. Each frame of data is multiplied by a scale and temperature factor, determined from the data, that places all the frames on the same intensity scale. The temperature factor may not make any sense for your data, but the scale might.
So, in your case, you want your fluorescence yields to be comparable across the separate trials. If all your fluorescence yields are being normalized already, this may not be relevant, but it sounds like the yields may be different enough to be causing problems. Could you pick one of your trials as the “reference” fluorescence and scale all the other experiments to it? The fluorescence readout for each trial would then be multiplied by a scale factor determined as the ratio of the average F of the reference data divided by the average F of that experiment. Maybe that way all the data will be on the same scale and you won’t see so much within-batch correlation.
It works well for x-ray data anyway.
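To make the scaling idea concrete, here is a minimal sketch in Python/NumPy. The array names and intensity values are purely illustrative (not from the original post); the point is just that multiplying each run by mean(reference)/mean(run) puts all runs on the reference intensity scale:

```python
import numpy as np

# Hypothetical peak-intensity arrays, one per AFLP run (values are illustrative).
reference_run = np.array([1200.0, 850.0, 430.0, 990.0])
other_run = np.array([600.0, 420.0, 215.0, 500.0])

# Scale factor that puts other_run on the reference intensity scale:
# mean(reference) / mean(other_run).
scale = reference_run.mean() / other_run.mean()
scaled_run = other_run * scale

print(scale)              # 2.0 for these made-up numbers
print(scaled_run.mean())  # now equals reference_run.mean() (867.5)
```

A per-run mean scale removes overall batch-to-batch intensity differences; it won’t fix run-specific distortions (e.g. size-dependent signal decay), which would need a more elaborate fit.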
February 3, 2008 at 3:24 am #81361canalonParticipant
My experience with PCR typing is that to get good normalisation you need to run controls so you can compare different gels/experiments:
– a size control, in case the separation conditions differ from experiment to experiment. A molecular size marker will be enough for that.
– an experiment control, i.e. one sample that is run in each experiment alongside all the others and serves as a reference for the labelling and amplification conditions. It can also be used to normalize size.
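The size-control normalization described above can be sketched in Python/NumPy: the size marker’s peaks map raw migration positions to fragment sizes, so peaks from different runs are compared in base pairs rather than run-dependent scan units. All numbers below are illustrative, not real ladder values:

```python
import numpy as np

# Hypothetical ladder: migration positions (scan units) of the size-marker
# peaks in this run, and their known fragment sizes in base pairs.
ladder_positions = np.array([120.0, 310.0, 560.0, 880.0, 1240.0])
ladder_sizes = np.array([50.0, 100.0, 200.0, 350.0, 500.0])

# Observed sample peak positions from the same run.
sample_positions = np.array([310.0, 720.0])

# Piecewise-linear interpolation against the ladder converts positions to
# sizes, which are comparable across runs even if separation conditions drift.
sample_sizes = np.interp(sample_positions, ladder_positions, ladder_sizes)
print(sample_sizes)  # [100. 275.]
```

Packages such as Bionumerics do this (and more sophisticated fits) internally; the sketch just shows the principle.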
The first normalization was easy with the software we used (Bionumerics). The second was possible, if I remember correctly, but I did not need it, and it would be impossible for me to remember how to do it now.