
Tuesday 18 September 2007

DC to daylight

Well, my machine is not going to pass any EMC regulations: my wife is complaining that it is interfering with the digital TV downstairs! The amount of noise coming from the little GM3 gear motor is astonishing. The top trace is the motor switching waveform and the bottom trace is the other lead of the motor, which sits at 12V. The vertical scale is 20V and the timebase is 0.4 ms.



In this instance the motor is being powered for about 300 µs every 1.3 ms when its negative lead is driven to ground. When the motor is switched off the voltage shoots up above 12V due to the back EMF. It gets capped at 48V by the over-voltage protection of the BTS134 low-side switch that I am using to drive it. It then has a damped oscillation at about 6 kHz before settling down to 12V for the remainder of the off period. This will be due to the inductance of the motor windings resonating with the 100nF capacitor I put across the motor terminals. Although it looks violent, it is actually the smaller burst of noise on the right which is causing all the problems.
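
As a rough sanity check on that, the ring frequency of an LC resonance is f = 1/(2π√(LC)), so if the 100nF suppression capacitor is the dominant capacitance, a 6 kHz ring implies a winding inductance of roughly 1/((2π × 6000)² × 100nF) ≈ 7 mH, which seems believable for a small gear motor.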

Here is a close-up of a similar burst of noise with a timebase of 10 µs.



This is at around 20 MHz and you can see that it gets onto the 12V rail. It is caused by the sparks at the motor brushes. Sparks emit RF energy from DC to daylight, as I was once told by an EMC expert. My guess is that 20 MHz is the resonant frequency of the motor windings with their own stray capacitance when they are momentarily disconnected from my suppression capacitor by the commutator.

One nasty aspect of this sort of noise is that it tends to get less as the motor brushes wear in and then get worse again as they start to wear out. I remember a project where a small motor was mounted close to a PIC. The PIC would frequently crash when the device was first run, but it would soon become impossible to recreate the problem until a new motor was fitted. I read that it is a good idea to "break in" DC motors by running them without any load at a low voltage for a few hours to allow the brushes to become a good fit to the commutator. Too late for mine though!

This is what the noise that gets onto the I²C lines looks like :-



A tough challenge then to make I²C reliable in this environment!

I began by stopping the comms from locking up so that I could add a retry scheme. To do that I had to put timeouts at all the points where I was waiting for the master controller to do something. When it times out I have to reset the controller and do one manual clock pulse to free up the slave, before delaying 100 µs and then re-enabling the controller. That stopped the comms locking up but did nothing to preserve data integrity: for example, while I was sending motor commands and reading the temperature, the heater came on of its own accord. Not good!
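
In outline, the timeout and bus-clear part looks something like this (a sketch only; all the helper names are hypothetical stand-ins for the real register accesses on the master controller) :-

    /* Illustrative sketch of the timeout / bus-recovery scheme
     * described above.  The helpers are placeholders. */
    extern int  i2c_flag_set(void);   /* has the controller finished?  */
    extern void i2c_disable(void);    /* reset the master controller   */
    extern void i2c_enable(void);
    extern void scl_pulse(void);      /* one manual clock on SCL       */
    extern void delay_us(unsigned int us);

    #define I2C_TIMEOUT_LOOPS 10000u

    /* Wait for the controller, but give up rather than spin forever. */
    static int i2c_wait(void)
    {
        unsigned int n;
        for (n = 0; n < I2C_TIMEOUT_LOOPS; n++)
            if (i2c_flag_set())
                return 0;             /* normal completion */
        return -1;                    /* timed out - bus probably stuck */
    }

    /* Called when i2c_wait() fails: reset the master, clock the bus
     * once by hand to free a slave left holding SDA low mid-byte,
     * wait 100 us, then re-enable the controller so the message can
     * be retried. */
    static void i2c_recover(void)
    {
        i2c_disable();
        scl_pulse();
        delay_us(100);
        i2c_enable();
    }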

The next thing I did was to add an 8-bit CRC to the end of each message so that I can detect when a message has been corrupted. 8 bits should be sufficient because the messages are only a few bytes long, i.e. less than 2⁸ bits, and the bursts of noise are only a few bits long, i.e. less than 8. I used a table-driven method, so the software overhead is just a 256-byte table, one XOR and a table lookup.
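
The per-byte update really is that small; a sketch (the 256-entry table would be precomputed for whichever polynomial is chosen, and the names are just illustrative) :-

    /* Table-driven CRC-8: one XOR and one table lookup per message
     * byte.  crc8_table[] holds 256 precomputed values for the
     * chosen polynomial. */
    extern const unsigned char crc8_table[256];

    unsigned char crc8(const unsigned char *msg, unsigned int len)
    {
        unsigned char crc = 0;
        while (len--)
            crc = crc8_table[crc ^ *msg++];
        return crc;
    }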

I also added a sequence flag in the top bit of the command byte. It alternates each time a new command is sent but stays the same on a retry. This lets the slave ignore commands that the master resends because the slave's previous reply was corrupted, rather than executing them a second time.
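
On the slave side the handling amounts to something like this (a sketch; the names are made up, just to show the idea) :-

    /* Sketch of the sequence-flag check on the slave.  The flag
     * lives in the top bit of the command byte. */
    #define SEQ_BIT 0x80

    extern void execute(unsigned char cmd);
    extern void send_reply(void);

    static unsigned char last_seq = 0xFFu;  /* forces the first command
                                               to be treated as new */
    void handle_command(unsigned char cmd_byte)
    {
        unsigned char seq = cmd_byte & SEQ_BIT;

        if (seq != last_seq) {
            last_seq = seq;
            execute(cmd_byte & (unsigned char)~SEQ_BIT); /* new command */
        }
        /* else: a retry of a command we already executed, because our
         * previous reply got corrupted - don't execute it twice. */
        send_reply();                       /* always (re)send the reply */
    }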

The result seems to be robust even with the massive amount of noise present but I don't like to paper over hardware problems with software. To make systems like this completely reliable I aim to get no retries in normal operation and only rely on the protocol to handle exceptional events. The root cause is the noise from the motor so I decided to have a go at tackling that.

I took a closer look at the noise on the motor leads without any suppression :-



It looks pretty random and different on each wire which is to be expected because the two brushes spark independently. Here is a spectrum analysis :-



It peaks at 23 MHz but must in fact extend to over 600 MHz to affect the television. There is also a lot of noise on the can. My first attempt to suppress it was to put a 100nF disc ceramic across the terminals and earth the can. That did not work well at all. I found that a more modern 1nF capacitor across the terminals worked better, and leaving the can floating was better than grounding it because grounding just put the noise on the ground rail. The old and new capacitors are shown below :-



It is no surprise to me that the smaller one works better at higher frequencies: because it is physically so much smaller, its inductance will be lower. It is also much kinder to the MOSFET driving it!

Doing a bit of research, I found that it is common practice to connect a capacitor from each terminal to the can, so I added two more 1nF caps, forming a triangle. That worked well: it got the retries on the I²C bus down to zero and also stopped the TV interference. I could have stopped there, but there was still plenty of noise visible on the scope. I added two small ferrite bead inductors that I salvaged from a very old disc drive, one in series with each lead, and put a small 10nF ceramic across the cable. That made a fantastic filter, leaving no noise visible on the scope.
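
For interest, treating each bead crudely as an inductor of a couple of µH, the beads and the 10nF cap form a low-pass filter with a corner (f = 1/(2π√(LC))) somewhere around a megahertz, so the 20 MHz brush noise should be attenuated very heavily, which tallies with the clean trace.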




I also decided to add a back EMF clamping diode rather than rely on the over-voltage protection of the MOSFET. 48V across a 5V motor is a bit much after all, and is high enough to give an electric shock.

Here is the resulting filter mounted on Vero board and fitted to the motor :-



The 1nF cap across the motor terminals is hidden by the board and the other two are underneath :-



And here is the new switching waveform with pretty much all overshoot, ringing and noise eliminated :-



If only all EMC problems were that easy!

Saturday 15 September 2007

Bus stops

The I²C bus not working was more of a problem. I had some issues when I just had the spindle controller connected, see bodge-it-and-move-on, but at that time I did not have a storage scope so I could not get to the bottom of it. In the meantime I bought a cheap 100 MHz 2 channel USB scope so I was able to find out what was going wrong.



As I have indicated above, the clock line has two glitches where it should have a proper clock pulse. The master generates the clock but the slave is allowed to stretch it by extending the low portion. Normally it is hard to tell which device is driving the bus, but in this case it is obvious because I made the pull-up resistors too low in value for the MSP430. I forgot that the MSP430 is aimed at low-power applications and so has an unusually low drive capability.

I still wasn't sure which end was the problem, so I coded the host end in software so that I could see exactly what the slave was doing. It is relatively easy to implement an I²C master in software because it does not have any strict timing deadlines. The slave does, so it is more difficult to implement purely in software.
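
For reference, the core of a bit-banged master is little more than the following (a sketch only; the open-drain pin helpers are hypothetical, "high" meaning release the line and let the pull-up raise it, and clock stretching falls out naturally from waiting for SCL to actually go high) :-

    /* Sketch of a bit-banged I2C master sending one byte, MSB first. */
    extern void sda_high(void), sda_low(void), scl_high(void), scl_low(void);
    extern int  sda_read(void), scl_read(void);
    extern void bit_delay(void);

    /* Returns 0 if the slave ACKed, -1 if it NAKed. */
    int i2c_write_byte(unsigned char b)
    {
        int i, ack;

        for (i = 0; i < 8; i++) {
            if (b & 0x80) sda_high(); else sda_low();
            b <<= 1;
            bit_delay();
            scl_high();
            while (!scl_read())
                ;                    /* slave may stretch the clock here */
            bit_delay();
            scl_low();
        }

        sda_high();                  /* release SDA for the ACK bit */
        bit_delay();
        scl_high();
        while (!scl_read())
            ;
        ack = sda_read() ? -1 : 0;   /* slave pulls SDA low to ACK */
        bit_delay();
        scl_low();
        return ack;
    }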

It turns out the problem lies with the mask revision of the MSP430F2013s that I am using. Revision B has several I²C bugs, such as pulling the clock low when it shouldn't, with no workarounds. I have had my chips for some time, long before I started this project. Two are actually labelled with a mask revision of X, which is undocumented; it seems to have at least all the bugs of mask B. The other is labelled as mask B, so none of them are any use for I²C!

Very disappointing that these days a device has to have three or four mask revisions just to get something as simple as an I²C module working. This seems to be the way things are going: hardware is becoming just as buggy as software. I wasted some time at work recently discovering that the UART in a PIC18F65J10 occasionally inserts zero bytes in the middle of your packet. These things were lab exercises when I was an undergraduate, now big companies can't get them right.

Fortunately, I had some MSP430F2012s; they are also mask revision B, but on that chip the I²C bugs were fixed one revision earlier. They are slightly different in that they have a 10-bit SAR ADC instead of a 16-bit sigma-delta ADC. This is actually more appropriate for my application, but it has a completely different software interface and voltage range.

Once I had swapped to the MSP430F2012 and modified the firmware for the new ADC, the I²C bus sprang into life. That was until I started to run the motor, whereupon I got occasional bus lockups. This is due to the massive amount of noise coming from the brushes of the DC motor: I get about 2.5V of noise on the I²C lines.

Using I²C without buffers for anything other than inter-IC comms is a really bad design decision; I am not sure what came over me. I normally use RS485 differential comms when linking boards that drive motors or other high-current loads and have never had a problem with noise. I even think I have publicly criticised the RepRap design for converting the RS232 to 5V signals before sending it around the ring of control boards. It was tempting to give I²C a try because I already had micros that support it but don't have UARTs, and my screened cable is only about a foot long. I²C is particularly susceptible to noise though, because it is only actively driven low and because it is edge sensitive. Also, the data rate I chose is five times faster than RepRap uses, and I am using 3.3V logic rather than 5V. I²C also has the nasty feature that corruption can cause the bus to lock up, which doesn't happen with asynchronous comms.

One thing I noticed was that earthing the can of the motor made things a lot worse by coupling the noise onto the ground rail. I established that the noise is conducted rather than radiated by running the motor from a separate bench PSU. I also managed to get it to run reliably by adding some 2200pF capacitors to the I²C lines, but that is a horrible bodge! Other things I will try :-
  1. Change the pull-up resistors from 1K to 2K2 so that the MSP430 can pull the lines fully low (rough numbers after this list). That will increase the logic-low noise immunity but make the logic-high immunity worse.
  2. See if I can program the master to clear the bus lockup.
  3. Add a CRC and packet sequence flags so I can detect errors and do retries.
  4. See if I can suppress the motor better.
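
On the first point: with 3.3V logic a 1K pull-up asks the pin to sink about 3.3 mA to hold the line near ground, whereas 2K2 only needs about 1.5 mA, which should be far more comfortable for the MSP430's weak output drivers.
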
If that doesn't fix it I may have to rethink the design. There are such things as high voltage I²C buffers but I expect they are SMT only. I could switch to differential comms or use a better DC motor, or replace it with a small stepper motor. That would also eliminate the need for a shaft encoder but I may struggle for torque.

Monday 18 June 2007

Disaster, frustration, success, disaster

Not posted for a while because I couldn't get the I²C link between my main controller and the spindle controller working. Before that I had a minor disaster when I crashed the drill chuck into a block of plastic. It appears to have bent the shaft out of true as the tool runs a little eccentrically now. The drill was made by Minicraft but they seem to have gone out of business. This made me realise how precarious it is to make something out of surplus components and things you have to hand rather than readily available parts. If I break anything then it might be impossible to get a drop-in replacement. Something like the $4000 XY table I got for $400 would be virtually irreplaceable.

Fortunately the drills seem to crop up on eBay fairly often. I got one almost identical except that it has a collet chuck rather than a three jaw one. It seems to have a bit of end play but this appears to be due to the bearing being loose in the housing rather than having play itself. I should be able to fix it with glue or shim it with a washer.

The I²C problem was a nightmare. I could get it to work if I stepped through the code at either end but not if I ran it. I was fairly sure the software was right because I had used the MSP430F2013 code before on a very similar device and it was based on an app note. The code at the MC9S12NE64 end was a translation of some code I used recently on a Coldfire which seems to have an almost identical I²C module. Not surprising as they are both Freescale devices.

Obviously it was a timing issue, but all the timing should be done by the hardware. I am only running it at 100 kbit/s and, with 1K pull-up resistors over 4 inches of cable, I should have no problems. Looking round the Web I found an errata sheet for the MSP430 listing lots of I²C bugs in the B revision of the chip with no workarounds. Most of these are fixed by revision C, but my chip seems to have an X where the die revision should be.

Because the problem manifested itself as a lockup, it wasn't possible to get a repetitive signal to look at on my analogue scope. I decided I should really have a digital storage scope for this type of problem, so I ordered a cheap USB scope front end, made in China and sold by a company in the US: $360 plus shipping for 100 MHz bandwidth, two channels and 250 MSps. Not bad if it works and the software is half decent. Most software supplied to support hardware seems to be complete rubbish these days: I have had four PC TV cards so far, each with decent hardware but software that is utter crap.

While waiting for it to arrive I decided, in desperation, to try reducing the clock speed of the MSP430. To my amazement, dropping it from 16 MHz to 8 MHz appeared to fix the problem. So definitely hardware, not software, then.

Flushed with success, I added a Python function to control the drill speed by sending a command via Ethernet to the main controller, which forwards it by I²C to the head controller. This worked fine, but as soon as the drill made contact with the tool height sensor for the first time the sensor stopped working. I thought the shaft of the drill was grounded but actually it was at +12V. Now that the power supplies are linked by the comms cable, it stuffed 12V into two of the 3.3V inputs of the MC9S12NE64 and, not surprisingly, they don't work any more! In fact I am surprised any of it still works, but most of it does.

So, a major disaster. It will be very difficult to replace the chip as it is fine-pitch surface mount. I do have some spare inputs I could use, providing the rest of the chip holds out. I can probably still buy another demo board, but it's starting to get expensive repairing my mistakes.

Curiously, the replacement drill seems to have an isolated shaft. I knew the first one's shaft wasn't isolated, but I assumed it was at ground, not +12V. However, it wouldn't have made any difference because, foolishly, I switched the motor with a low-side driver, so when it is off both wires are at 12V!

So now I have to rewire the tool sensor inputs, add some protection and cobble the two drills together to make one good one, preferably using the motor with the isolated shaft. The I²C comms need beefing up a bit with a CRC, a packet sequence flag and an acknowledgement / retry mechanism. I also want to add drill stall detection and use the shaft encoders on the XY table to detect stalls there as well. Hopefully then I will avoid any more costly tool crashes and can move on to making the RepRap extruder.