Example of Bad Detector Pathology

I didn't get a satisfactory result from Scalepack.

I would check the following:

  • Beam line
  • Shutter
  • Detector
  • Crystal size, radiation sensitivity, mosaicity etc.
  • Consistency of indexing in some space groups, if data from more than one crystal are used
  • Crystal isomorphism

 

Comments: It is now clear that we do have shutter problems. We only recently discovered that ours apparently either opens late or closes intermittently.

I recently scaled a data set that had good Rmerge values but high normalized χ² values in Scalepack. Is there something wrong, or should I ignore the χ² values? I know that I can boost the σ values in Scalepack with the error scale factor, but I was wondering whether it could be due to some other parameter in Denzo, for example an incorrect background definition or box size. Our spots are very elongated, but the data are strong and the χ² values from Denzo look good. I know that other data sets give values in the range 1-2, but this one is in the range 3-4. Does it get harder when the spot ellipse is more asymmetric?

Reprocessing with Denzo rarely improves χ² dramatically, unless you correct a major mistake in the processing.
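
To make the "boosting the σ values" idea concrete, here is a minimal Python sketch of my own (not anything from the HKL suite; the function and toy numbers are invented for illustration) of how a normalized χ² behaves when every σ is inflated by an error scale factor:

    import numpy as np

    def normalized_chi2(intensities, sigmas, error_scale_factor=1.0):
        """Normalized chi**2 for one group of symmetry-equivalent observations.

        Illustrative only: Scalepack accumulates this over all unique
        reflections with its full error model, but the behaviour is the same.
        """
        i = np.asarray(intensities, dtype=float)
        s = np.asarray(sigmas, dtype=float) * error_scale_factor  # inflate the sigmas
        mean_i = np.average(i, weights=1.0 / s**2)                # weighted mean <I>
        dof = len(i) - 1                                          # degrees of freedom
        return np.sum((i - mean_i) ** 2 / s**2) / dof

    # Toy group whose measurements scatter more than their sigmas suggest.
    obs    = [1050.0, 980.0, 1120.0, 900.0]
    sigmas = [30.0, 28.0, 32.0, 29.0]

    print(normalized_chi2(obs, sigmas))                           # well above 1
    print(normalized_chi2(obs, sigmas, error_scale_factor=2.0))   # 4x smaller

Inflating the σ values lowers χ² by the square of the factor, which is why a large enough error scale factor can always "fix" χ² cosmetically without fixing whatever is actually wrong with the measurements.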

 

I am trying to scale some data which processed very nicely from a 30 cm MAR at 9.6, and I am trying to see why it doesn't scale well. If I change the value of the error scale factor, should I try to get the final χ² column at the bottom of the Scalepack output to be 1.##, or should it be equal to the value of the error scale factor I give?

At the moment it looks like:

 Shell limit      Average I    Average    Average    Norm.  Linear  Square
 Lower   Upper                   error      stat.   Chi**2   R-fac   R-fac
 (Angstrom)
 100.00   4.31     205113.5    17423.5    13959.9    4.838   0.087   0.155
   4.31   3.42     140289.3     9859.9     7050.6    5.088   0.077   0.127
   3.42   2.99      59864.0     4943.1     3835.6    5.593   0.092   0.174
   2.99   2.71      31249.4     3211.6     2690.9    4.465   0.101   0.180
   2.71   2.52      21540.4     2658.0     2326.9    3.501   0.113   0.172
   2.52   2.37      16115.9     2358.6     2132.3    2.615   0.120   0.156
   2.37   2.25      13138.6     2214.7     2044.7    2.435   0.132   0.182
   2.25   2.15      10403.3     2107.3     1985.4    2.597   0.163   0.228
   2.15   2.07       7984.0     2084.2     2001.9    1.883   0.191   0.241
   2.07   2.00       6503.1     2127.2     2059.1    1.790   0.226   0.290
 All reflections    52388.8     4959.8     4047.5    3.657   0.092   0.149

with error scale factor 3.0 and estimated error 0.06. Things are pretty bad.

I'd try Scalepack with, e.g., estimated error 0.10 (usually I use 0.06) and rejection probability 0.01. A value for the error scale factor? Usually about 1.5 seems quite good, but I guess in this case I should experiment a bit - though if I need an error scale factor greater than, say, 2.0, then I know that something is still wrong?
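
For what it's worth, a commonly quoted rule of thumb (my own addition here, not part of the original reply) is that χ² scales as 1/(error scale factor)², so the factor needed to bring the overall χ² to about 1 is roughly the current factor times √χ². Applied to the table above:

    import math

    def suggested_error_scale_factor(current_esf, overall_chi2):
        # chi**2 scales as 1 / esf**2, so scale the current factor by sqrt(chi**2)
        return current_esf * math.sqrt(overall_chi2)

    # Numbers from the run above: error scale factor 3.0, overall chi**2 3.657
    print(suggested_error_scale_factor(3.0, 3.657))   # ~5.7

A factor of about 5.7 is far beyond the ~2.0 threshold mentioned above, which supports the reading that the error model alone cannot account for the misfit.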

 

I tried scaling again with error scale factor 1.5, estimated error 0.10, and rejection probability 0.1. This time the scaling statistics look like this:

 Shell limit      Average I    Average    Average    Norm.  Linear  Square
 Lower   Upper                   error      stat.   Chi**2   R-fac   R-fac
 (Angstrom)
 100.00   4.31     207375.1    17638.1     6884.6    4.242   0.088   0.136
   4.31   3.42     141622.7    11636.3     3527.6    4.565   0.080   0.115
   3.42   2.99      60834.7     5307.6     1921.0    4.949   0.091   0.145
   2.99   2.71      31748.7     3013.8     1348.5    4.555   0.099   0.146
   2.71   2.52      21800.0     2247.5     1161.2    4.118   0.109   0.146
   2.52   2.37      16230.9     1822.3     1063.3    3.710   0.117   0.141
   2.37   2.25      13217.2     1602.3     1020.6    3.642   0.128   0.160
   2.25   2.15      10430.2     1422.9      988.6    4.094   0.157   0.206
   2.15   2.07       8010.3     1310.4     1000.5    3.451   0.184   0.220
   2.07   2.00       6584.4     1296.2     1033.9    3.515   0.220   0.268
 All reflections    52826.6     4803.3     2009.7    4.142   0.093   0.131

Any suggestions as to how to make these numbers a bit better? At least now all the χ² values are similar - I know that I can compensate for them by increasing the error scale factor, but is there a better way? What exactly is the scanner doing to give such funny data? Elspeth thought that there was a problem with the nonlinearity of the detector above a certain value, which is why I tried using the overload value in Denzo to try to get rid of rubbish data!
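
One way to test the nonlinearity idea would be to bin the unmerged observations by the group mean <I> and look at χ² per intensity bin: if the detector misbehaves near saturation, χ² should climb with intensity. A small Python sketch of my own (not something from the thread; all arrays below are synthetic placeholders, in practice they would come from your unmerged reflection file):

    import numpy as np

    rng = np.random.default_rng(0)
    i_mean  = rng.uniform(1_000.0, 200_000.0, size=5_000)     # group means <I>
    sigma   = 0.05 * i_mean + 50.0                            # toy error model
    scatter = np.where(i_mean > 150_000.0, 3.0, 1.0)          # extra scatter at the top end
    i_obs   = i_mean + rng.normal(0.0, sigma * scatter)       # observed intensities

    # chi**2 per intensity quartile
    edges = np.percentile(i_mean, [0, 25, 50, 75, 100])
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (i_mean >= lo) & (i_mean < hi) if hi < edges[-1] else (i_mean >= lo)
        chi2 = np.mean(((i_obs[sel] - i_mean[sel]) / sigma[sel]) ** 2)
        print(f"<I> in [{lo:9.0f}, {hi:9.0f}):  chi**2 = {chi2:5.2f}")

If only the strongest bin misbehaves, that would be consistent with the nonlinearity explanation and with the decision above to try rejecting overloads.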

I’ve fiddled around with all my data, and it doesn’t scale well no matter what I do. It’s a shame, because my data are 96% complete to 2.1 Å. Since it doesn’t scale, though, I think I will not use the data and will instead collect it all again. It seems like a waste, but I think it is safer to start from good data with good scaling before making maps and drawing conclusions from their interpretation. Do you think I should give up on the data?

No other choice. It seems your data got screwed up.