Or perhaps just use a 100 MΩ resistor in series with the probe tip; this would ensure any added resistive ladder does not affect the circuit under test (L- and C-based and other "balanced" or resonant circuits can sometimes be skewed amazingly far by even a few ohms).
If your probe is now a "x1", it likely provides a total input impedance of 1 megohm (the industry standard for most 'scopes). Regular "stand-alone" scope x1 probes have almost no DC resistance of their own (the input impedance is provided internally, with only a trimming filter cap in the better ones). PC-based scopes usually have most of the impedance in the probe even for a "x1". If the probe is a "x10", then it likely has 9 or 10 megohms of impedance depending on which of the above types it is (you can "Ohm" it with a DMM).
Putting the 100 MΩ resistor on the tip of a "x1" probe will decrease the measured values by a factor of about 100 (101:1 against a 1 MΩ input, so 30 kV reads as roughly 300 V... plus or minus the tolerance of the resistor itself). Luckily, precise measurements are usually not required for these HV applications.
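As a sanity check on that arithmetic, here is a minimal sketch of the divider math, assuming the probe/scope input looks purely resistive (input capacitance ignored, which is fine for DC and low frequencies); the 100 MΩ, 1 MΩ, 10 MΩ and 30 kV figures are the ones discussed above:

```python
# Series-R attenuation, treating the probe/scope input as a plain resistive divider.

def attenuation(r_series_ohms, r_input_ohms):
    """Factor by which the scope reading is reduced by the series resistor."""
    return (r_series_ohms + r_input_ohms) / r_input_ohms

R_SERIES = 100e6   # 100 M ohm resistor on the probe tip
R_X1     = 1e6     # typical total input impedance with a "x1" probe

factor = attenuation(R_SERIES, R_X1)
print(f"x1 input:  about {factor:.0f}:1")                       # ~101:1
print(f"30 kV at the tip -> {30e3 / factor:.0f} V at the scope")  # ~297 V

# With 10 M ohm total (a "x10"-style probe/scope combination) the extra
# attenuation from the 100 M is only about 11:1 on top of the probe's own division:
print(f"10 M ohm input: about {attenuation(R_SERIES, 10e6):.0f}:1")
```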
The reason "real" HV probes are so expensive is they also have built in circuit protection in them besides the added resistance and special heavy-duty high-density insulation to keep the fingers away from shock and arc dangers. But they also often lower the analog Frequency Response of the whole system quite a bit (from the rating of the scope to around only 50 kHz in many cases). So if considering buying one; check that spec too
The home-made version will affect the analog bandwidth of the scope slightly too, but not that badly. Where this could be of prime importance is when specific "fast" waveform shapes are being examined and compared, and/or when transient spikes are being looked for. But as long as all measurements are done with the same scope and probe arrangement, it should work fine for comparison purposes.
Also regarding frequency response, remember that digital scopes (which includes all the PC-based ones and the new small portable ones) have another important factor: sample rate. You need at least 10 samples per cycle to get an accurate waveform representation, or you risk "aliasing". So if looking for spikes, take the period of the fastest one you expect to see, convert it to Hz, and multiply by 10 to get the sample-rate setting (a 1 ms spike corresponds to 1 kHz, for a single-channel sample rate of 10 kS/s). Of course, more than 10 times is OK too... it really only matters when storing the data to disk, where too much over-sampling creates giant files.
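A quick sketch of that rule of thumb; the 10-samples-per-cycle figure and the 1 ms spike are from the post above, while the 10 µs example is just an illustrative number:

```python
# Rule-of-thumb sample-rate picker: period of the fastest expected spike,
# converted to a frequency, times at least 10 samples per cycle.

def min_sample_rate(fastest_period_s, samples_per_cycle=10):
    """Minimum single-channel sample rate (samples/s) for a given fastest period."""
    frequency_hz = 1.0 / fastest_period_s
    return frequency_hz * samples_per_cycle

# A 1 ms spike corresponds to 1 kHz, so ~10 kS/s per channel:
print(f"{min_sample_rate(1e-3):.0f} samples/s")   # 10000

# A (hypothetical) 10 us transient would call for at least 1 MS/s:
print(f"{min_sample_rate(10e-6):.0f} samples/s")  # 1000000
```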