In my last blog post I referred to the return of digital signal processing in the form of a discrete, low-power chip that acts as coprocessor to the main applications processor of a smartphone or mobile device. Taking this one step further raises a fundamental question: how complex are these helping hands in terms of their signal processing capabilities?
Determining motion
Let’s take the iPhone as an example. The latest iPhone 6 uses the newer M8 motion coprocessor (an NXP Semiconductors LPC18B1 chip) in combination with Apple’s very own A8 / APL1011 applications processor, as outlined in a recent teardown by TechInsights. The word motion provides the first clue to the chip’s primary purpose: monitoring movement to determine whether the user is sitting, running, walking, cycling or driving. In fact, iPhone apps query this user activity status via the CMMotionActivity class offered by the iOS operating system. To determine user activity, the motion coprocessor most likely takes acceleration readings in three axes (x, y and z) from the accelerometer. Repeatedly calculating the maximum and minimum values is enough to distinguish sitting from running/cycling and from driving, regardless of whether the user is holding the phone, has it stashed in a trouser pocket or nestled in a car’s device cradle.
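To get a feel for how little number crunching such a first-pass classification needs, here is a minimal Python sketch along those lines. It is purely illustrative and not Apple’s actual algorithm: the window length, units and thresholds are made-up values.

```python
import numpy as np

def coarse_activity(ax, ay, az):
    """Very rough activity guess from one window of 3-axis accelerometer
    samples (in g). Thresholds are illustrative, not Apple's values."""
    # Peak-to-peak swing per axis over the window
    swings = [np.max(a) - np.min(a) for a in (ax, ay, az)]
    total_swing = sum(swings)
    if total_swing < 0.1:      # barely any movement: phone at rest
        return "stationary"
    elif total_swing < 0.5:    # small, smooth variations: road and engine vibration
        return "automotive"
    else:                      # large repetitive swings
        return "active (walking/running/cycling)"

# Example: 60 samples at ~2 Hz (about 30 s) of a phone lying on a desk
t = np.arange(60) / 2.0
still = 0.01 * np.sin(2 * np.pi * 0.1 * t)   # tiny residual wobble
print(coarse_activity(still, still, still))  # prints "stationary"
```

Because only per-axis extremes are compared, the result does not depend much on how the phone happens to be oriented.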
However, it’s far more difficult to distinguish between running and cycling, as the acceleration in all three axes is quite similar for both activities. This is where measuring yaw (rotation) with the gyroscope sensor comes into play. Cycling has a smoother repetitive motion than jogging, so calculating the spectrum of the yaw rate with an FFT (Fast Fourier Transform) reveals a single dominant frequency for cycling, set by the cyclist’s cadence. Of course, the calculations can be relaxed further if likelihoods are taken into consideration. For example, a measurement indicating an abrupt change of state from cycling to driving is rather implausible. Statistical models such as Bayesian inference or Markov chains come to the rescue here. And if all else fails and a confident activity guess is out of the question, iOS conveniently provides the state unknown.
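To make the yaw-rate argument concrete, the Python sketch below computes the FFT of one window of yaw-rate samples and measures how strongly a single cadence peak dominates the spectrum. The 50 Hz gyroscope rate, window length and synthetic signals are assumptions for illustration only.

```python
import numpy as np

def dominant_peak_ratio(yaw_rate, fs=50.0):
    """Return (peak_frequency_Hz, peak_ratio) for a window of yaw-rate
    samples. peak_ratio is the strongest spectral line divided by the sum
    of all spectral magnitudes (DC excluded); values close to 1 indicate
    one clean cadence peak, as expected for cycling."""
    spectrum = np.abs(np.fft.rfft(yaw_rate - np.mean(yaw_rate)))
    freqs = np.fft.rfftfreq(len(yaw_rate), d=1.0 / fs)
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return freqs[k], spectrum[k] / np.sum(spectrum[1:])

# Synthetic 10 s windows: a clean 1.3 Hz pedalling wobble vs. a noisy jog
fs = 50.0
t = np.arange(0, 10, 1 / fs)
cycling = np.sin(2 * np.pi * 1.3 * t)
running = np.sin(2 * np.pi * 1.3 * t) + 1.5 * np.random.default_rng(1).standard_normal(t.size)

for name, signal in (("cycling", cycling), ("running", running)):
    f, ratio = dominant_peak_ratio(signal, fs)
    print(f"{name}: dominant peak at {f:.2f} Hz, peak ratio {ratio:.2f}")
```

The same peak-ratio idea works at much lower sampling rates, as long as the cadence frequency stays below the Nyquist limit of half the sampling rate.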
The M8 LPC18B1UK chip is based on ARM’s Cortex-M3 core. In contrast to the follow-up Cortex-M4 core, the M3 does not include a DSP instruction set. So it’s plausible that the activity-tracking calculations are performed at a low rate. In fact, the chip is clocked at only 150 MHz, which in turn makes it battery efficient, so it can run constantly without ever taking a break. Even in the iPhone’s standby mode it collects, calculates and caches sensor data. The iPhone stores up to seven days of results in the LPC18B1UK’s 1 MByte of on-chip flash. Sampling rates for the 14-bit accelerometer data are thus probably in the 1 to 2 Hz range. In other words, really slow.
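A quick back-of-the-envelope calculation makes the storage constraint tangible. The 2-bytes-per-sample packing below is my own assumption, not a documented format.

```python
# Back-of-the-envelope storage budget: can seven days of raw 3-axis,
# 14-bit accelerometer samples fit into 1 MByte of flash? Packing each
# 14-bit sample into 2 bytes is an assumption for illustration.
FLASH_BYTES = 1 * 1024 * 1024        # 1 MByte of on-chip flash
SECONDS = 7 * 24 * 3600              # seven-day retention window
BYTES_PER_SAMPLE_SET = 3 * 2         # x, y, z at 2 bytes each

max_rate = FLASH_BYTES / (SECONDS * BYTES_PER_SAMPLE_SET)
print(f"Max sustainable raw logging rate: {max_rate:.2f} sample sets/s")
# Roughly 0.3 Hz, so the seven-day log presumably holds condensed
# activity results rather than every raw 1-2 Hz sample.
```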
From lightweight processing to heavy lifting
As the iPhone 6 example above underlines, motion detection is all about lightweight processing. Similarly, Atmel’s sensor hub solution, found in several Samsung smartphones (see Wikipedia), uses Atmel’s SAM D20, which features an on-chip ARM Cortex-M0+ core and no DSP functionality. However, Atmel’s sensor hub roadmap points to the follow-up SAM G51/53, which is based on the DSP-rich ARM Cortex-M4 core. Sensor hubs are evidently transitioning from lightweight processing to heavy DSP lifting. Recent smartphones from HTC, Nokia, Samsung and Sony confirm this trend: they use Qualcomm’s Snapdragon 800 SoC (system-on-chip) family, whose on-chip sensor engine is based on Qualcomm’s powerful 32-bit Hexagon DSP core, which also offers floating-point support. Next-generation smartphones, wearables and other mobile electronics will thus not only capture both user and environmental sensor data but also combine these streams in an eclectic signal processing mix to provide never-before-seen smarts for the user.