Viability of PSP Nubs


WizardStan said:
Someone posted a problem they were having with their nub and went so far as to actually desolder and remove it, and took pictures of the pieces. No logic circuit on the nub itself, just resistive rubber and 5 legs, suggesting it's a pure analogue device.
Pictures linked from the (updated) wiki page:
http://pandorawiki.org/Nubs

It seems that there are 2 contacts for the centre, and 4 around the edge. 100% analogue.
 
I'm guessing that they need the controllers because the firmware is closed source.. :/

sauce: Here
 
nikkopt said:
I'm guessing that they need the controllers because the firmware is closed source.. :/

sauce: Here

Again, this closed source firmware is being run on the controllers, not on the nubs, so it doesn't follow that the controllers are needed. OP could have done a totally different nub interface, either connecting it directly to I/Os on the OMAP3/TWL4030 if suitable and available, or using their own auxiliary microcontroller setup with their own firmware. That would be one less closed component in the system.

tsh said:
Pictures linked from the (updated) wiki page:
http://pandorawiki.org/Nubs

It seems that there are 2 contacts for the centre, and 4 around the edge. 100% analogue.

This information is hot off the probing urjaman did last night:

The electrical interface between the nubs and the controller is over 6 pins - 1 analog output and 5 digital inputs. It seems the analog output on the nubs is muxed, and the digital lines are used to select which of 5 channels you want (so it's not a binary selector, and if you drive more than one of them weird things might happen). urjaman speculates that the five channels are up, down, left, right, and "push"; we know that the nubs are in fact push sensitive from information MWeston revealed earlier. The analog output appears to be a variable resistance (making the nub a rheostat), with a high-valued resistor and presumably a constant current source for converting it to a suitable voltage range.

Having a mux means that you need only 1 ADC to get the value, but it also means you need more digital work to perform the switching and sweep through all the channels. So if it were done on the OMAP3 the CPU load could be higher than expected. Right now I believe the microcontroller is sending out I2C packets at something like 50Hz, so you'd need to manually switch GPIOs at 250Hz or more to achieve this. This rate isn't a very big deal though (once every 2 million cycles at 500MHz, it would probably use at most 0.1% or so CPU time, depending on what the options are for retrieving ADC values off the TWL4030 - you'd probably want to oversample several values between the extremes of the period, throw out the edges where the switching maybe wasn't stable yet, and take the average).
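For illustration, here's a rough sketch of what such a sweep-and-average loop might look like, assuming the select lines and the ADC are accessible from one place; gpio_drive_high(), gpio_tristate() and adc_read_mv() are hypothetical helpers standing in for whatever the real OMAP3 GPIO / TWL4030 ADC interface provides, not an actual API:

/* Hypothetical platform helpers -- not a real API. */
extern void gpio_drive_high(int line);
extern void gpio_tristate(int line);
extern int  adc_read_mv(void);

#define NUB_CHANNELS 5

static int nub_raw[NUB_CHANNELS];

/* Sweep all five mux channels once: select a channel by driving its
 * line and floating the rest, then oversample the ADC and average,
 * skipping a sample taken right after switching. */
void nub_sweep(void)
{
    for (int ch = 0; ch < NUB_CHANNELS; ch++) {
        for (int line = 0; line < NUB_CHANNELS; line++) {
            if (line == ch)
                gpio_drive_high(line);
            else
                gpio_tristate(line);
        }

        adc_read_mv();                 /* throw away one settling sample */
        int sum = 0;
        for (int s = 0; s < 8; s++)
            sum += adc_read_mv();
        nub_raw[ch] = sum / 8;
    }
}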

At the very least having two ATmegas instead of one is wasteful, since there are enough ADC channels and GPIOs available on one for this task.
 
Okay, here's a picture, because it was really hard to try and explain this on IRC:
nub_schematic.png

I did that in 5 mins in Paint, just to show how I think one nub is attached to the microcontroller (no pin names because I'm lazy).
This is how I think they're muxing the analog lines (the Rvars are the nub's resistive rubber or whatever, and the 1Mohm resistor is on the PCB).
You can read the value of one resistor by driving one of those digital lines high and tristating the others.
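As a rough sketch of that trick on the AVR side (assuming, purely for illustration, that the five select lines sit on PC0..PC4 - the drawing gives no pin names, so the real assignment is unknown):

#include <avr/io.h>

/* Tristate all five select lines, then drive just the chosen one high. */
static void select_channel(uint8_t ch)
{
    const uint8_t mask = 0x1F;          /* PC0..PC4 (assumed) */
    DDRC  &= (uint8_t)~mask;            /* inputs = high impedance */
    PORTC &= (uint8_t)~mask;            /* pull-ups off while tristated */
    DDRC  |= (uint8_t)(1 << ch);        /* selected line becomes an output */
    PORTC |= (uint8_t)(1 << ch);        /* driven high */
}

Only one line is ever driven at a time, which matches the earlier warning that driving more than one channel at once might do weird things.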

EDIT: I think the main reason why we have 2 Atmels instead of one is that the company had not designed the nubs to be used in pairs, thus requiring 1 AVR per nub. Another reason could be that their proprietary logic could (in theory) be so complex that one AVR wouldn't be able to handle two nubs, but I doubt it.

EDIT2: :ph34r:
 
Exophase said:
At the very least having two ATmegas instead of one is wasteful, since there are enough ADC channels and GPIOs available on one for this task.
This, I think, was dictated by the closed source firmware. The nub maker may have said "one nub, one microcontroller" and that was the only way they'd do it. With non-closed firmware they could probably have changed it to require only one controller.
 
urjaman said:
This, I think, was dictated by the closed source firmware. The nub maker may have said "one nub, one microcontroller" and that was the only way they'd do it. With non-closed firmware they could probably have changed it to require only one controller.

Yeah, and from what I'm seeing, MWeston mentioned the nub company indeed sold the ATmegas with the nubs - preflashed with the proprietary firmware.

The situation is, in my opinion, sloppy. As far as I'm concerned the ATmega should be part of a reference design and the source should be part of an app note, not both part of the product. That is, unless their business model is to make money reselling microcontrollers and dodgy, proprietary programs. This strategy might even work for a while, but not once customers realize they'd be much better off not buying components they don't necessarily need and source code they could probably write better themselves, and an emerging competitor capitalizes on this by selling nubs $5+ cheaper.

I can only guess that all the chaos in getting a nub supplier for the Pandora resulted in OP conceding to whatever design requirements the manufacturer dictated... but I guess hindsight is 20/20 and, as usual, I probably don't know all the details.

urjaman: Okay, I guess calling it an analog mux wasn't exactly appropriate.

>> Usual 'I know nothing about electronics' disclaimer warning >>

So does this mean that the GPIOs off the ATmega provide a constant current source when driven high, and zero amps when floating, thus converting the varying resistance to a varying voltage which is divided by the 1M resistor into the ADC's reference range? And the values are added in series? If this is true, are GPIOs off an ATmega8L really that decent as precision current sources? Or am I wrong and the current is regulated some other way? Or am I just wrong about everything ;)
 
Exophase said:
So does this mean that the GPIOs off the ATmega provide a constant current source when driven high, and zero amps when floating, thus converting the varying resistance to a varying voltage which is divided by the 1M resistor into the ADC's reference range? And the values are added in series? If this is true, are GPIOs off an ATmega8L really that decent as precision current sources? Or am I wrong and the current is regulated some other way? Or am I just wrong about everything ;)
The AVR ADC senses voltage only. The GPIOs drive quite well for digital logic (10-15mA would be my guess from 2.8V), but that's irrelevant here.
(Note: I'm having trouble understanding your questions.)
Why would you need a constant current source? The voltage over the whole chain (Rvar + 1Mohm) is 2.8V, so I = 2.8V/(Rvar + 1Mohm) and U at ADC0 = I * 1Mohm. If Rvar = 0, the voltage is 2.8V; if Rvar = inf, it's 0.
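Plugging some numbers into that divider (a throwaway illustration, using only the 2.8V and 1Mohm figures quoted in this thread):

#include <stdio.h>

/* Voltage at ADC0: the drop across the 1Mohm leg of Rvar + 1Mohm at 2.8V. */
static double adc0_voltage(double rvar_ohm)
{
    const double vcc = 2.8, rfixed = 1.0e6;
    return vcc * rfixed / (rvar_ohm + rfixed);
}

int main(void)
{
    printf("Rvar = 0    -> %.4f V\n", adc0_voltage(0.0));    /* 2.8000 V */
    printf("Rvar = 200  -> %.4f V\n", adc0_voltage(200.0));  /* ~2.7994 V */
    printf("Rvar = 300  -> %.4f V\n", adc0_voltage(300.0));  /* ~2.7992 V */
    printf("Rvar = 100k -> %.4f V\n", adc0_voltage(1.0e5));  /* ~2.5455 V */
    return 0;
}

Note how a few hundred ohms barely moves the output, which is exactly the accuracy worry in the EDIT below.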

EDIT: The problem I'm seeing here is that the ADC doesn't seem to be anywhere near accurate enough to sense the small resistance change I had when I moved the nub (like from 200 to 300 ohm) against that 1Mohm resistor...

EDIT: Of course the resistors are also connected to each other via ADC0, so maybe they're driving one output to ground and the other one high, thus creating a different-looking voltage divider...
 
MWeston and Craigix always said that the nubs were custom made, and this sounds like the OP team gave the specs to the factory and not vice versa. If you get a custom product, you usually frame the specs; at least to me that would be the logical way.
So I guess the nubs are the way they have to be, otherwise it wouldn't work. But I'm still wondering why even a custom nub had to stay at the tiny 2mm travel that was also often mentioned. OPT could have chosen 3mm or more, I guess. So are the nubs REALLY custom made especially for the Pandora? :huh:

However, I guess we have to deal with these nubs for a long time, so we should make the best of it. I don't have any skills to replace the nubs at all. ^^""
 
fusion_power said:
So are the nubs REALLY custom made especially for the Pandora? :huh:
Somebody on IRC (ED? I don't remember, sorry) said that this nub tech was originally used for medical applications, then modified for use on the Pandora, or something.
 
urjaman said:
Why would you need a constant current source? The voltage over the whole chain (Rvar + 1Mohm) is 2.8V, so I = 2.8V/(Rvar + 1Mohm) and U at ADC0 = I * 1Mohm. If Rvar = 0, the voltage is 2.8V; if Rvar = inf, it's 0.

Okay thanks, I understand now. So basically the change in resistance is changing the value of the voltage divider. I think this is a problem if the resistance is linear with the motion of the nubs, because the voltage is not linear with resistance. The 1M resistor of course biases this, but if it does so to any significant degree it swamps your range and you just end up spending all your precision measuring that 1M, like you said.
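To put a number on that, here's a quick back-of-the-envelope check of how little a 200 to 300 ohm swing moves the output against the 1M resistor, assuming a 10-bit ADC (as on the ATmega8L) and the 2.8V reference quoted in this thread:

#include <stdio.h>

int main(void)
{
    const double vcc = 2.8, rfixed = 1.0e6;
    double v200 = vcc * rfixed / (200.0 + rfixed);
    double v300 = vcc * rfixed / (300.0 + rfixed);
    double lsb  = vcc / 1024.0;                    /* ~2.73 mV per ADC step */

    printf("delta = %.3f mV = %.2f ADC steps\n",
           (v200 - v300) * 1000.0, (v200 - v300) / lsb);
    /* prints roughly: delta = 0.280 mV = 0.10 ADC steps */
    return 0;
}

So against a 1M resistor the whole nub movement would be buried well under one ADC step, which matches the accuracy concern quoted below.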

urjaman said:
EDIT: The problem I'm seeing here is that the ADC doesn't seem to be anywhere near accurate enough to sense the small resistance change I had when I moved the nub (like from 200 to 300 ohm) against that 1Mohm resistor...

urjaman said:
EDIT: Of course the resistors are also connected to each other via ADC0, so maybe they're driving one output to ground and the other one high, thus creating a different-looking voltage divider...

Could you describe to me what the effective division would be in a situation like that? I mean, it doesn't end up adding different varying resistances together, does it?
 
Exophase said:
Could you describe to me what the effective division would be in a situation like that? I mean, it doesn't end up adding different varying resistances together, does it?
Total resistance cannot be seen, but you can see the ratios of the resistances.
I'm pretty sure which two pins/resistances sense the up-down direction, so my example uses those:
XT1-ADC0 was 240 ohm when up, 300 when down. ADC0-XT2 was 270 ohm when up, 200 when down.
Voltage when up, given XT1 is grounded and XT2 is high, would be 1.31764706V.
Voltage when down would be 1.12V.
The difference sounds small, but it would be 73 units on the ADC scale (enough to give the current output of -32...32 in each axis).

EDIT: Just to add, with this method there would be something like 20 _possible_ "channels" to read from, if wanted. I don't know the exact behaviour of the nub, so I'm not sure how many of these would convey usable data.
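As a sanity check of the "73 units" figure (illustrative only, assuming a 10-bit ADC and the 2.8V reference mentioned earlier):

#include <stdio.h>

int main(void)
{
    const double v_up = 1.31764706, v_down = 1.12, vref = 2.8;
    double lsb = vref / 1024.0;               /* ~2.73 mV per step on a 10-bit ADC */

    printf("%.1f ADC steps\n", (v_up - v_down) / lsb);   /* ~72.3, close to the 73 quoted */
    return 0;
}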
 
A great idea for OPT and maybe davec - design an analog nub with a retracting stick: you press it down and it springs up for use, press it down again and it drops to the same level as the Pandora so you can close the lid. This would make the analog nub an analog stick like on games consoles.

The other part of the idea would be twist-and-go analog nubs: you can actually twist and unlock the entire nub free from the Pandora unit, so it's easy to replace and you can buy spares.


It would be very nice to see both ideas become reality!
 
urjaman said:
Total resistance cannot be seen, but you can see the ratios of the resistances.
I'm pretty sure which two pins/resistances sense the up-down direction, so my example uses those:
XT1-ADC0 was 240 ohm when up, 300 when down. ADC0-XT2 was 270 ohm when up, 200 when down.
Voltage when up, given XT1 is grounded and XT2 is high, would be 1.31764706V.
Voltage when down would be 1.12V.
The difference sounds small, but it would be 73 units on the ADC scale (enough to give the current output of -32...32 in each axis).

EDIT: Just to add, with this method there would be something like 20 _possible_ "channels" to read from, if wanted. I don't know the exact behaviour of the nub, so I'm not sure how many of these would convey usable data.

Okay, so this divider should give the following, right? Where R1 is tied to XT1 and R2 to XT2.

Vout = (2.8 * (R1 / (R1 + R2))) * (1000000 / (R1 + R2 + 1000000))

I get 1.317V for the up value, but for down I get 1.679V. And because I'm so ignorant about circuits I checked with a simulator and it says the same thing.

If I swap so XT1 is high and XT2 is grounded, i.e.:

Vout = (2.8 * (R2 / (R1 + R2))) * (1000000 / (R1 + R2 + 1000000))

Then I get 1.119. But of course I won't get the same value as you for the first one this way. Is this what you meant to do, or did you make a mistake?
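For reference, a small snippet that reproduces this check with urjaman's measured values and the formula written out above (nothing here beyond arithmetic):

#include <stdio.h>

/* Divider with r_gnd to the grounded XT pin, r_high to the one driven to
 * 2.8V, and the 1Mohm load from the ADC0 node, per the formula above. */
static double vout(double r_gnd, double r_high)
{
    const double vcc = 2.8, rload = 1.0e6;
    return vcc * (r_gnd / (r_gnd + r_high)) * (rload / (r_gnd + r_high + rload));
}

int main(void)
{
    /* XT1 grounded, XT2 high (R1 on the XT1 side, R2 on the XT2 side) */
    printf("up:            %.3f V\n", vout(240.0, 270.0));  /* ~1.317 V */
    printf("down:          %.3f V\n", vout(300.0, 200.0));  /* ~1.679 V */
    /* drive polarity swapped: XT1 high, XT2 grounded */
    printf("down, swapped: %.3f V\n", vout(200.0, 300.0));  /* ~1.119 V */
    return 0;
}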

No matter what, this still seems like too little precision. And I wonder what the purpose of the 1M resistor is. And are you sure the reference voltage is 2.8?
 
Hmmm... Thought I posted here, but it seems to have been lost in the ether/proxy.

It might be possible to dynamically switch in a different resistor in place of the 1M if the ADC range is a real problem. Alternatively, using +x and -x as a coupled pair may allow the full range to be measured with only half the dynamic range.

/OT - mali, thanks :)
 