Position Sensor

Input/Output Devices

W. Bolton, in Programmable Logic Controllers (Sixth Edition), 2015

2.1.6 Position/Displacement Sensors

The term position sensor is used for a sensor that gives a measure of the distance between a reference point and the current location of the target, while a displacement sensor gives a measure of the distance between the present position of the target and the previously recorded position.

Resistive linear and angular position sensors are widely used and relatively cheap. These are also called linear and rotary potentiometers. A DC voltage is applied across the full length of the track, and the voltage between a contact that slides over the resistance track and one end of the track is related to the position of the sliding contact between the ends of the potentiometer resistance track (Figure 2.16). The potentiometer thus provides an analog linear or angular position sensor.
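
The voltage-divider relation above maps directly to a position reading. A minimal sketch, assuming an ideal linear track; the function name and the supply-voltage and track-length values are illustrative, not from the text:

```python
def pot_position(v_out, v_supply, track_length_mm):
    """Ideal linear potentiometer: wiper voltage is proportional to
    the wiper's position along the resistance track."""
    if not 0.0 <= v_out <= v_supply:
        raise ValueError("wiper voltage outside supply range")
    return (v_out / v_supply) * track_length_mm

pos_mm = pot_position(2.5, 10.0, 100.0)  # 2.5 V of a 10 V supply on a 100 mm track
```

A rotary potentiometer works the same way, with the track length replaced by the full rotation angle.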

Figure 2.16. Potentiometer.

Another form of displacement sensor is the linear variable differential transformer (LVDT), which gives a voltage output related to the position of a ferrous rod. The LVDT consists of three symmetrically placed coils through which the ferrous rod moves (Figure 2.17). When an alternating current is applied to the primary coil, alternating voltages, v1 and v2, are induced in the two secondary coils. When the ferrous rod core is centered between the two secondary coils, the voltages induced in them are equal. The outputs from the two secondary coils are connected so that their combined output is the difference between the two voltages, that is, v1 − v2. With the rod central, the two alternating voltages are equal and so there is no output voltage. When the rod is displaced from its central position, there is more of the rod in one secondary coil than in the other. As a result, the alternating voltage induced in one coil is greater than that in the other. The difference between the two secondary coil voltages, that is, the output, thus depends on the position of the ferrous rod. The output from the LVDT is an alternating voltage. This is usually converted to an analog DC voltage and amplified before being input to the analog channel of a PLC.
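
Near the null position the demodulated difference v1 − v2 is proportional to the core displacement, and its sign gives the direction. A toy sketch of that conversion; the sensitivity value is an assumed figure for illustration:

```python
def lvdt_displacement(v1_mv, v2_mv, sensitivity_mv_per_mm):
    """Estimate core displacement (mm) from the demodulated secondary-coil
    voltages (mV). Near the null, (v1 - v2) is proportional to displacement;
    a positive result means the core has moved toward the first coil."""
    return (v1_mv - v2_mv) / sensitivity_mv_per_mm

disp_mm = lvdt_displacement(520.0, 480.0, 80.0)  # 40 mV difference at 80 mV/mm
```

In practice the phase of the output relative to the excitation is used to recover the sign before this scaling step.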

Figure 2.17. LVDT.

Capacitive displacement sensors are essentially just parallel plate capacitors. The capacitance will change if the plate separation changes, the area of overlap of the plates changes, or a slab of dielectric is moved into or out of the plates (Figure 2.18). All these methods can be used to give linear displacement sensors. The change in capacitance has to be converted into a suitable electrical signal by signal conditioning.
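
Each of the three methods changes the capacitance through the standard parallel-plate relation C = ε0·εr·A/d. A short sketch showing how halving the gap or the overlap area, or inserting a dielectric, moves C; the dimensions are invented:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

c0     = plate_capacitance(1e-4, 1e-3)             # 1 cm^2 plates, 1 mm gap
c_gap  = plate_capacitance(1e-4, 0.5e-3)           # halving the gap doubles C
c_area = plate_capacitance(0.5e-4, 1e-3)           # halving the overlap halves C
c_diel = plate_capacitance(1e-4, 1e-3, eps_r=4.0)  # dielectric slab raises C
```

Note that varying the gap gives a nonlinear (1/d) response, whereas varying the overlap area is linear, which is why area-overlap designs are preferred for linear displacement sensing.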

Figure 2.18. Capacitor sensors: (a) changing the plate separation, (b) changing the area of overlap, and (c) moving the dielectric.

URL: https://www.sciencedirect.com/science/article/pii/B9780128029299000029

Using the Luenberger Observer in Motion Control

George Ellis, in Control System Design Guide (Fourth Edition), 2012

18.1.3 Position Feedback Sensor

Luenberger observers are most effective when the position sensor produces limited noise. Sensor noise is often a problem in motion-control systems. Noise in servo systems comes from two major sources: EMI generated by power converters and transmitted to the control section of the servo system, and resolution limitations in sensors, especially in the position feedback sensor. EMI can be reduced through appropriate wiring practices [67] and through the selection of components that limit noise generation.

Resolution noise from sensors is hard to deal with. Luenberger observers frequently exacerbate sensor-noise problems. The availability of high-resolution feedback sensors raises the likelihood that an observer will substantially improve system performance.
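
To see where sensor noise enters, consider a minimal discrete-time Luenberger observer for a double-integrator (position/velocity) plant: the correction term L(y − ŷ) feeds measurement noise straight into both estimates, and a larger gain amplifies it. The matrices and gain values below are illustrative assumptions, not taken from the book:

```python
import numpy as np

# Minimal discrete-time Luenberger observer for a double-integrator
# (position/velocity) plant. The correction term L @ (y - y_hat) couples
# position-sensor quantization noise directly into both state estimates.
dt = 0.001                              # sample period, s (assumed)
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
B = np.array([[0.5 * dt**2], [dt]])     # acceleration input
C = np.array([[1.0, 0.0]])              # only position is measured
L = np.array([[0.4], [40.0]])           # observer gain (assumed; larger -> faster, noisier)

def observer_step(x_hat, u, y):
    """One step: predict with the model, correct with the measured position."""
    y_hat = float(C @ x_hat)
    return A @ x_hat + B * u + L * (y - y_hat)

x_hat = np.zeros((2, 1))
for y in (0.0, 0.001, 0.001, 0.002):    # quantized position samples, m
    x_hat = observer_step(x_hat, 0.0, y)
```

Because the velocity row of L is large, each one-count quantization step in y produces a proportionally large jump in the velocity estimate, which is the mechanism behind the noise amplification described above.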

URL: https://www.sciencedirect.com/science/article/pii/B9780123859204000187

Using the Luenberger Observer in Motion Control

George Ellis, in Observers in Control Systems, 2002

8.1.4.4 Sensor Noise

Luenberger observers are most effective when the position sensor produces limited noise. Sensor noise is often a problem in motion-control systems. Noise in servo systems comes from two major sources: EMI generated by power converters and transmitted to the control section of the servo system, and resolution limitations in sensors, particularly in the feedback sensor. EMI can be reduced through appropriate wiring practices [30] and through the selection of components that limit noise generation, such as those that comply with European CE regulations.

Noise from sensors is difficult to deal with. As was discussed in Chapter 7, Luenberger observers often exacerbate sensor-noise problems. While some authors have described uses of observers to reduce noise, in many cases the observer will have the opposite effect. As discussed in Chapter 7, lowering observer bandwidth will reduce noise susceptibility, but it also reduces the ability of the observer to improve the system. For example, reducing observer bandwidth reduces the accuracy of the observed disturbance signal. The availability of high-resolution feedback sensors raises the likelihood that an observer will substantially improve system performance.

URL: https://www.sciencedirect.com/science/article/pii/B9780122374722500092

Surgical navigation

Wang Manning, Song Zhijian, in Computer-Aided Oral and Maxillofacial Surgery, 2021

3.3 Position sensor

From the working principle shown in Fig. 7.2 we can see that the position sensor plays a crucial role in surgical navigation. Currently, position sensors used in surgical navigation can be divided into two categories: optical sensors and electromagnetic sensors.

An optical position sensor tracks objects by stereo vision, and according to the light used it can be further divided into two sub-categories: visible light sensors and infrared light sensors. The optical position sensors of Northern Digital Inc., currently the leading position sensor provider for surgical navigation worldwide, belong to the infrared category. In this kind of sensor, an infrared emitter illuminates the viewing range of the sensor, and the infrared light reflected by a specially designed sphere is captured by two cameras on the sensor. The sphere is the minimal unit that can be tracked by the sensor. The position of the sphere in the PCS is calculated from stereo vision. Tracked tools are designed with several spheres on them, and the spheres on each tool have a unique geometry. When the two cameras of the sensor locate multiple spheres in the PCS, the system matches the loaded tools' geometries to the tracked spheres, and a tracked tool is found when its sphere geometry is matched. The transformation from the tool's local coordinate system, defined on the spheres, to the PCS is calculated from the spheres' coordinates in the tool coordinate system and in the PCS. In an infrared position sensor, an infrared light-emitting diode can be used in place of a sphere. In this scenario, the sensor tracks the diode by the light it emits, and no light emission is needed from the sensor. Tools of this kind are commonly called active tools, and accordingly tools using spheres are called passive tools. Besides infrared position sensors, there are optical position sensors that use visible light to track tools. The unit to be tracked by this kind of sensor is a specially designed marker, usually composed of crossed black and white checks. The marker's role is the same as that of the sphere in an infrared sensor, and tracked tools are designed to contain several markers with a unique geometry.
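
One way to picture how a tool is identified by its unique sphere geometry is to compare sorted inter-sphere distances against each loaded tool definition. This is a simplified illustration of the matching idea, not the actual algorithm of any commercial tracker; the function names and the 0.5 mm tolerance are assumptions:

```python
import itertools
import numpy as np

def matches_tool(tracked_spheres, tool_geometry, tol_mm=0.5):
    """Test whether a set of tracked sphere positions (mm) matches a tool's
    geometry by comparing sorted inter-sphere distances. Distances are
    invariant to rotation and translation, so the comparison works in any
    frame; a real tracker also resolves the sphere correspondence."""
    spheres = np.asarray(tracked_spheres, float)
    geometry = np.asarray(tool_geometry, float)
    if len(spheres) != len(geometry):
        return False
    def pair_dists(pts):
        return sorted(np.linalg.norm(a - b)
                      for a, b in itertools.combinations(pts, 2))
    return all(abs(d1 - d2) <= tol_mm
               for d1, d2 in zip(pair_dists(spheres), pair_dists(geometry)))
```

Because each tool's inter-sphere distances are unique, the tracker can tell tools apart even when several are visible at once; once matched, the sphere correspondences determine the tool-to-PCS transformation.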

The latest version of the infrared position sensor from Northern Digital Inc., Polaris Vega®, has a root mean square tracking error as low as 0.12 mm when tracking a single sphere. Commercially available sensors all have a root mean square tracking error of around 0.20 mm, which is sufficient for surgical navigation. It should be noted that what we actually care about in surgical navigation is how accurately a tool, rather than a sphere or a marker, is tracked. Studies show that if the tracked tools are properly designed and manufactured, tool tracking accuracy is at the same level as tracking a single sphere [26]. In using a surgical navigation system, we should be careful not to destroy the geometry of the tracked tools. If the geometry of the part of a tool carrying the spheres or diodes is destroyed, the tool may become untrackable or the tracking error will increase. On the other hand, if the geometry of the part of a tracked tool without the spheres or diodes is destroyed, large error may be introduced at the tracked point of interest on the tool. For example, if the tip of a navigation probe is bent, the probe can still be tracked normally, but the tracked probe tip is no longer the real tip of the tool, which can be very misleading to the users. For this reason, many surgical navigation systems include a step to verify the tip of the navigation tool before it is used. One final note about infrared position sensors is that the tracking accuracy of active and passive tools is equivalent [26].

An electromagnetic position sensor uses a different physical principle to track tools and has different working conditions from optical position sensors. It generates an electromagnetic field in front of a field generator, and a small tool is placed in the electromagnetic field so that small currents are induced by the field, which varies in a designed pattern. The current depends on the relative position and direction between the tool and the electromagnetic field, so the tool's position and orientation can be calculated from the current signals.

Generally speaking, the tracking error of electromagnetic position sensors is higher than that of optical position sensors, but it is still adequate for most surgical navigation applications. The biggest advantage of an electromagnetic position sensor is that it need not directly see the tracked tools, which means that the sensor won't be occluded as long as the tool resides in the electromagnetic field. In contrast, optical position sensors must directly see the spheres or diodes on a tracked tool, otherwise the tool will be lost. The limitation of the electromagnetic position sensor is that it needs a precisely controlled electromagnetic field, and any objects that may interfere with that field, such as some metal surgical tools or electronic equipment, are not allowed in the field.

URL: https://www.sciencedirect.com/science/article/pii/B9780128232996000079

SLAM for Pedestrians and Ultrasonic Landmarks in Emergency Response Scenarios

Carl Fischer, ... Mike Hazas, in Advances in Computers, 2011

3.2.3 SLAM

Visualizing the output of the SLAM filter as we did when the sensor positions were known in advance can be misleading, because the estimated coordinates of the sensors change during the course of the experiment as they are updated by the SLAM process. Figure 14 shows that in all the experiments, the estimated positions of the sensors change over time. Note that these plots are in an arbitrary coordinate system and do not directly map to the surveyed sensor positions without first finding and applying the most appropriate rotation and translation. Initially, the estimates move a lot as more measurements are taken, but even after they have stabilized they continue to drift slowly, and several sensors appear to move together in the same direction. Since our map is defined by the sensor positions, it also moves. In other words, this SLAM filter only gives positions of the sensors and of the pedestrian relative to the other sensors, not necessarily in an absolute coordinate system. This means that the filter might provide different coordinates for the pedestrian after he returns to a previous location, but this could still be correct (in a relative sense) if the estimated positions of the sensors have also changed during that time. Conversely, if the estimated position of the pedestrian remains the same as previously but the estimated coordinates of the sensors have changed, the result could be wrong (in a relative sense). This is because we are performing online SLAM. Offline methods such as GraphSLAM [41], which optimize the complete trajectory estimate and map after all data has been recorded, do not suffer from this limitation. They can be used to display the complete path and the landmarks on a single map.

Fig. 14. Changes in the estimated positions of the sensors during the experiments. Initially the estimates change a lot, then they stabilize but continue to drift slowly. (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

Because of the shifting position estimates of the sensors, it is not necessarily helpful to plot the estimated path of the user. Nevertheless, we can show snapshots of the sensor position estimates. In Fig. 15, we show the estimated positions of the sensors at the end of each experiment. These estimates were aligned to the surveyed sensor positions after running a nonlinear regression to determine the transformation (rotation and translation) that minimizes the sum of squared errors between surveyed and estimated positions. The line represents the Ubisense estimated path, which is already in the same coordinate space as the surveyed sensor positions.
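
The alignment step described above, finding the rotation and translation that minimize the sum of squared errors between estimated and surveyed positions, has a closed form in 2D. A sketch under that assumption, not the authors' exact regression procedure:

```python
import numpy as np

def align_2d(est, surveyed):
    """Closed-form 2D rotation + translation mapping estimated sensor
    positions onto surveyed ones (both Nx2, in corresponding order),
    minimizing the sum of squared errors. Apply as: est @ R.T + t."""
    est, surveyed = np.asarray(est, float), np.asarray(surveyed, float)
    ce, cs = est.mean(0), surveyed.mean(0)
    de, ds = est - ce, surveyed - cs
    # Optimal angle maximizes sum of ds . (R de): theta = atan2(B, A)
    theta = np.arctan2((de[:, 0] * ds[:, 1] - de[:, 1] * ds[:, 0]).sum(),
                       (de * ds).sum())
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = cs - R @ ce
    return R, t
```

Because only rotation and translation are estimated, relative distances within the SLAM map are preserved; the alignment is purely for comparison against the survey.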

Fig. 15. Estimated positions of sensors at the end of each experiment, rotated and translated to minimize the distance to the surveyed positions. (We have chosen not to show the estimated pedestrian path for reasons explained in Section 3.2.3.) (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

Three of the experiments give reasonable results in terms of sensor positions. Figure 15F was based on PDR data with only a small amount of drift, so good results are expected, but Fig. 15E was similar in terms of drift and yet at least five of the sensors are placed more than 3 m away from where they should be. This is a consequence of the ambiguities in the initialization method: each of these sensors is placed on the wrong side of the path. As explained earlier in the chapter, this occurs due to the combination of two factors: (1) the measurements used for the initialization of the sensor are taken from points which are nearly collinear; (2) the bearing measurements are too noisy to determine which of the two possible positions is correct, and our heuristic selection method (Fig. 7) fails. The consequences can be quite small if the misplaced sensor is only in range of the straight section of path with which it was initialized, for example the two sensors in the upper right corner of Fig. 15E. But in general, these types of errors will create error in the pedestrian position estimates. This problem occurs for our data because the sensors were deployed in advance (although their positions were not made available to the SLAM algorithm). If the sensors were instead deployed by the pedestrian, dropping them at their feet as they walk around, they could be directly initialized with the pedestrian's current position. The orientation would be trivial to initialize as the pedestrian moves on and a bearing measurement is taken. This type of manual deployment seems suitable for a real-world scenario if only a few sensors are required.

Figure 15G and H provides particularly good estimates. This is probably because these paths did not include any straight sections; therefore, the initialization was less likely to be ambiguous, and sensors that were initialized incorrectly were adjusted thanks to the variety of range/bearing measurements taken from many different positions. In other words, the SLAM solution benefits from favorable geometric dilution of precision. This bears some similarity to situations where planes or ships are required to perform a particular maneuver in order to improve tracking of a target [55].

In order to evaluate the performance of the filter in a more quantitative manner, we look at the range and bearing errors between the sensors, and between the sensors and the pedestrian, for every update (Figs. 16 and 17). These errors reflect how accurately the sensors and the pedestrian are positioned relative to each other in the case where sensor positions are known in advance, and in the complete SLAM case with no prior knowledge. As expected, the errors for SLAM are higher than when the sensor positions are known. But in Fig. 16G and H, the range errors for SLAM seem to be quite close to the errors for the prior-knowledge case, with a 90th percentile value of less than 1.2 m. This improved performance could be due to the type of path (random, unstructured), but we note that these data sets were recorded in the large room only, and did not cover the rest of the test area. In almost all cases, the 90th percentile range error between the pedestrian and the sensors is less than 2 m. This value reflects how well the pedestrian can be located in the map.
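
The 90th-percentile figures quoted in this section are summary statistics over the per-update error samples. A sketch of how such a summary might be computed; the sample values are invented:

```python
import numpy as np

def error_summary(errors_m):
    """Summarize per-update range errors (metres) the way the chapter
    reports them: the median and the 90th percentile."""
    e = np.asarray(errors_m, float)
    return float(np.median(e)), float(np.percentile(e, 90))

median_err, p90_err = error_summary([0.3, 0.5, 0.8, 1.0, 1.1])
```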

Fig. 16. Cumulative range error distributions for the full duration of each experiment. (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

Fig. 17. Cumulative bearing error distributions for the full duration of each experiment. (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

Figures 16 and 17 also show the innovations. The range innovations tend to be smaller than the corresponding estimated errors. This again suggests that using the position estimates from Ubisense as ground truth overestimates some of the errors.

The bearing errors from Fig. 17 are more difficult to interpret because they depend on the position error of the pedestrian and sensors, and on the orientation error of the sensors. For example, if a sensor's orientation is correct but the pedestrian is estimated to be a few centimeters in front of it instead of a few centimeters behind it, the bearing error could be 180°.
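
The 180° figure follows directly from the geometry: flipping a small position error from just behind the sensor to just in front of it reverses the bearing. A toy illustration with invented positions:

```python
import math

def bearing_deg(x, y):
    """Bearing (degrees) from a sensor at the origin, facing +x, to a point."""
    return math.degrees(math.atan2(y, x))

true_bearing = bearing_deg(-0.05, 0.0)  # pedestrian 5 cm behind the sensor
est_bearing = bearing_deg(0.05, 0.0)    # estimated 5 cm in front of it
bearing_error = abs(true_bearing - est_bearing)  # 180 deg from a 10 cm position error
```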

In Figs. 16 and 17, we showed errors between all sensors, but it would also be reasonable to only take into account sensors that are either close to or far from the pedestrian, depending on whether we are interested in local accuracy (position relative to nearby sensors, necessary for navigation) or global accuracy (position relative to sensors which are far away, necessary for mission planning). In Figs. 18 and 19, we have plotted separately the errors between sensors, and between pedestrian and sensors, when they are less than 3 m apart, and those errors when they are more than 3 m apart. The 3 m limit is arbitrary, but corresponds to an area which could quickly be searched by a firefighter equipped with a long-handled tool. These figures show that in many cases the local range error for SLAM is close to the error when the sensor positions are known. When the far range errors are larger than the local errors, this is due to large-scale distortion of the sensor positions. Large-scale distortion makes it difficult to overlay the estimated sensor positions onto a map or floorplan, but should not affect indoor navigation scenarios where a firefighter uses only nearby sensors as landmarks to progress toward a target in small steps.
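
The near/far partition used in Figs. 18 and 19 can be sketched as a simple filter on sensor-pedestrian separation; the sample distances and errors are invented:

```python
import numpy as np

def split_near_far(distances_m, errors_m, limit_m=3.0):
    """Partition per-update errors by separation, as in Figs. 18 and 19:
    'near' (<= limit) reflects local accuracy, 'far' (> limit) global."""
    d = np.asarray(distances_m, float)
    e = np.asarray(errors_m, float)
    return e[d <= limit_m], e[d > limit_m]

near, far = split_near_far([1.0, 2.5, 4.0, 6.0], [0.2, 0.4, 0.9, 1.5])
```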

Fig. 18. Cumulative range error distributions for near sensors (≤ 3 m) and far sensors (> 3 m). (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

Fig. 19. Cumulative bearing error distributions for near sensors (≤ 3 m) and far sensors (> 3 m). (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

The bearing errors from faraway sensors can be much smaller than the local bearing errors. Due to simple geometric properties, distance tends to dilute the effect of position error on the bearing. This is particularly visible in the case where sensor positions and orientations are known. SLAM bearing errors are generally large. Only Fig. 19F and H shows a 90th percentile of bearing errors below 60° for both short range and long range.

In some cases, despite our heuristic initialization method (Fig. 7), the SLAM algorithm places sensors on the wrong side of a straight section of path due to the symmetry ambiguity. This does not affect local range errors between the sensor and the pedestrian because of the symmetry, but the errors for faraway sensors are increased. Sensors in this situation are likely to have very inconsistent orientations, and thus the bearing errors for both near and faraway sensors will be high (Fig. 19C and G).

All our experiments can be split into sections of similar duration during which a similar path was walked. For experiments (B) to (F), the same path was repeated several times. For experiments (A), (G), and (H), the pedestrian returned to the same point at regular intervals. Whereas the previous figures show the aggregate errors over the full duration of the experiment, Figs. 20 and 21 give the median range and bearing errors for each section of the path. In a few cases, there is a noticeable improvement at each iteration, but in the other cases the median error remains constant or even increases. For these latter cases, this could mean that the sensor and pedestrian position estimates are as good as they are going to get after the first section and that there is no further improvement. After a firefighter has explored the building once and deployed the sensors, the system is immediately ready to help the following teams find their way.

Fig. 20. Median range errors for each successive section of the path. (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

Fig. 21. Median bearing errors for each successive section of the path. (A) Large room. (B) All rooms (T) 1. (C) All rooms (T) 2. (D) All rooms (T) 3. (E) All rooms (T) 4. (F) All rooms (O). (G) Random 1. (H) Random 2.

URL: https://www.sciencedirect.com/science/article/pii/B9780123855145000033

Free Egress Electrified Locks

Thomas Norman, in Electronic Access Control, 2012

Electric Strikes

Electric Strikes are access control devices that can be used with conventional mechanical locks to allow the door to be opened regardless of the lock/unlock status of the mechanical lock. The Electric Strike replaces the fixed strike faceplate usually found on door frames, into which the mechanical lock latchbolt extends when it closes, thus latching the door.

Just like a fixed strike faceplate, the Electric Strike normally presents a ramped surface for the mechanical latch to close against. As the door closes, the latchbolt retracts momentarily to accommodate the ramped surface, and then springs back to extend again into the latch pocket when the door is fully closed. The latch is kept in place by the back of the ramped surface, called a latch keeper.

However, unlike a conventional fixed strike faceplate, the electric strike has a solenoid that controls the position of the latch keeper. In its quiescent state, the latch keeper is fixed, like the keeper of a fixed strike faceplate. But when the solenoid is activated, the latch keeper can swing aside, allowing a space where the extended latch is free to open with the door. Thus the door can be opened while the lock is still locked.

Electric Strikes are available in either fail-safe or fail-secure functions. Fail-safe, also called fail-open, causes the electric strike to lock when power is applied. Fail-secure unlocks the electric strike when power is applied.

Electric Strikes are available in either AC or DC versions. AC versions "buzz" when operated, while DC versions are nearly silent, except for a quiet "click" when the lock is released. DC electric strikes are sometimes equipped with a buzzer to signal a person outside that the strike is open.

Switches Available for Electric Strikes

Latch-Bolt Monitoring Switch: Electric Strikes can often be ordered with a latchbolt position sensor, which many use like a door position switch to determine if the door is open or closed. Please note that the latchbolt position sensor is not equivalent to the door position switch. It is possible to place a wad of paper in the latch pocket holding the latchbolt position sensor closed, thus fooling the Access Control System into thinking that the door is closed when it is actually open.

Dead-Bolt Monitoring Switch: Electric Strikes built to accommodate mechanical locks with dead-bolts may also be equipped with a dead-bolt position monitoring switch.

Lock Status Monitoring Switch: This switch identifies the condition of the electric strike's locking mechanism, telling the Access Control System if the strike is locked or unlocked.

You should exercise caution when modifying an existing door frame to accept an electric strike, to be certain that you are not doing so on a fire-rated door frame. Such a modification would void the fire rating of the frame.

URL: https://www.sciencedirect.com/science/article/pii/B9780123820280000107

Free Egress Electrified Locks

Thomas L. Norman CPP/PSP, in Electronic Access Control (Second Edition), 2017

Electric Strikes

Electric Strikes are access control devices that can be used with conventional mechanical locks to allow the door to be opened regardless of the lock/unlock status of the mechanical lock. The Electric strike replaces the fixed strike faceplate normally found on door frames, into which the mechanical lock latchbolt extends when it closes, thus latching the door.

Just like a fixed strike faceplate, the Electric strike normally presents a ramped surface for the mechanical latch to close against. As the door closes, the latchbolt retracts momentarily to accommodate the ramped surface, and then springs back to extend again into the latch pocket when the door is fully closed. The latch is kept in place by the back of the ramped surface, called a latch keeper.

However, unlike a conventional fixed strike faceplate, the electric strike has a solenoid that controls the position of the latch keeper. In its quiescent state, the latch keeper is fixed, like the keeper of a fixed strike faceplate. But when the solenoid is activated, the latch keeper can swing aside, allowing a space where the extended latch is free to open with the door. Thus the door can be opened while the lock is still locked.

Electric strikes are available in either fail-safe or fail-secure functions. Fail-safe, also called fail-open, causes the electric strike to lock when power is applied. Fail-secure unlocks the electric strike when power is applied.

Electric strikes are available in either alternating current (AC) or direct current (DC) versions. AC versions "buzz" when operated, while DC versions are nearly silent, except for a quiet "click" when the lock is released. DC electric strikes are sometimes equipped with a buzzer to signal a person outside that the strike is open.

Switches available for electric strikes

Latch-Bolt Monitoring Switch: Electric strikes can often be ordered with a latchbolt position sensor, which many use like a door position switch to determine if the door is open or closed. Note that the latchbolt position sensor is not equivalent to the door position switch. It is possible to place a wad of paper in the latch pocket holding the latchbolt position sensor closed, thus fooling the access control system into thinking that the door is closed when it is actually open.

Dead-Bolt Monitoring Switch: Electric strikes built to accommodate mechanical locks with dead-bolts may also be equipped with a dead-bolt position monitoring switch.

Lock Status Monitoring Switch: This switch identifies the condition of the electric strike's locking mechanism, telling the access control system if the strike is locked or unlocked.

You should exercise caution when modifying an existing door frame to accept an electric strike, to be certain that you are not doing so on a fire-rated door frame. Such a modification would void the fire rating of the frame.

URL: https://www.sciencedirect.com/science/article/pii/B9780128054659000105

Measuring the human

Jonathan Lazar, ... Harry Hochheiser, in Research Methods in Human-Computer Interaction (Second Edition), 2017

13.3.1 Muscular and Skeletal Position Sensing

The Wii remote, introduced by Nintendo in 2005, ushered in a new era of consumer electronics capable of sensing position and movement. Using a combination of accelerometers and optical sensing, the Wii remote provides multiple degrees of freedom, allowing natural inputs for games such as tennis and bowling. In addition to its commercial success, the Wii was rapidly adopted by HCI researchers who explored the possibility of enhancing the range of applications to include possibilities such as gesture recognition (Schlömer et al., 2008), and studied the use and adoption of the new games, particularly in social contexts (Voida and Greenberg, 2009).

Although the Wii might have been the first notable commercial success, HCI researchers have been working with novel sensing devices for years. Early published HCI work with accelerometers predates the Wii by several years (Levin and Yarin, 1999). The use of accelerometers in HCI research exploded with the advent of their ubiquitous availability in smartphones. Applications have included sensing posture to help stroke survivors (Arteaga et al., 2008), identifying repetitive and troublesome behavior in students with autism spectrum disorder (Albinali et al., 2009), fall detection (Fudickar et al., 2012; Ren et al., 2012; Mehner et al., 2013), and even detecting bad driving (Singh et al., 2013). Smartphone accelerometers have also been used as mouse-like input devices (Yun et al., 2015) and for gesture recognition (Kim et al., 2016).

Moving beyond accelerometers in smartphones, recent years have seen an explosion in the availability of wrist-worn sensors. Although wrist-watch heart-rate monitors have been available for years, the current generation of fitness sensors goes much further, adding the capability to track steps, sleep, floor-climbing, and energy usage, in combination with integrated smartphone functionality. Although concerns about the accuracy of some measurements may limit the utility of these devices for some purposes (Kaewkannate and Kim, 2016; Wallen et al., 2016), feedback provided by these tools may help users understand and increase the efficacy of their habits. The challenge of understanding how these tools are used over time can be significant, as technical challenges, nuanced user behavior often involving multiple devices, accuracy, inappropriate mental models, and other challenges complicate effective use of the tools and interpretation of resulting data (Harrison et al., 2014; Rooksby et al., 2014; Yang et al., 2015). As these devices continue to grow in capability and popularity, further research will undoubtedly continue to ask how these monitoring capabilities can be used more effectively. For example, one study of physical activity monitors found that customized plans that encouraged users to reflect on exercise strategies were more effective than automatically constructed plans (Lee et al., 2015).

Smartwatches such as the Apple Watch provide wrist-worn easy access to a wider range of smartphone facilities than those provided by fitness sensors. These watches have been used to develop approaches for sensing gestures made by fingers (Xu et al., 2015; Wen et al., 2016; Porzi et al., 2013; Ogata and Imai, 2015). The 2016 release of the Apple Watch presents more opportunities for HCI researchers, particularly as new tools are developed to explore the use of the watch as an unobtrusive computing device in everyday settings (Bernaerts et al., 2014; Quintana et al., 2016). Exercise and fitness sensors provide similar capabilities—see Chapter 14 for additional discussion of these sensors.

Microsoft's Kinect takes a different approach to sensing position and motion. Like the Wii remote, Kinect comes out of the gaming world—in this case, Microsoft's Xbox. Kinect includes a depth sensor, cameras, and microphones capable of capturing body motion in 3D, and recognizing faces and voices (Zhang, 2012). Kinect sensors have been used in a broad range of contexts, including for assessing posture and motion (Clark et al., 2012; Dutta, 2012), observing audience responses to interactive displays (Shi and Alt, 2016), providing feedback to speakers giving public presentations (Tanveer et al., 2016), interacting with large displays (Zhang, 2015), and, of course, playing games, both for entertainment (Marshall et al., 2016; Tang et al., 2015) and for rehabilitation (Huang et al., 2015; Wang et al., 2014; Muñoz et al., 2014). Data complexity can make analysis of Kinect interactions somewhat challenging, as several types of analyses are needed to extract objects, human activities, gestures, and even surroundings from Kinect data (Han et al., 2013). Toolkits such as Kinect Analysis (Nebeling et al., 2015) might simplify this analysis, but proper design and interpretation will always be a key component of any study using Kinect or similar data. For a discussion of the challenges involved in using Kinect data in natural (non-lab) settings, see the LAB-IN-A-BOX sidebar below.

The Wii, smartphone accelerometers, smart watches, fitness monitors, and Kinect all provide examples of consumer technologies used in HCI research. These commodity tools provide researchers with commercial-quality, ready-to-use hardware and software that can be readily integrated into research, without requiring any of the engineering work required to collect data using home-grown or assembled components. For further discussion of smart watches and fitness trackers, see Chapter 14.

The need to transcend the limitations of commercial tools has inspired countless tinkerers and experimenters to develop and adapt novel motion and position sensing tools to both collect input from users and to measure activity. The accessibility community has been developing novel interfaces enabling users with reduced motor capacity to control computers since at least the 1970s (Meiselwitz et al., 2010). Other recent efforts have involved the development of any number of innovative sensors. Fiber optics (Dunne et al., 2006b), flexible sensors (Demmans et al., 2007), and sensors mounted on chairs (Mutlu et al., 2007) have been used to assess posture. Foam sensors stitched into clothing can detect both respiration and shoulder and arm movements (Dunne et al., 2006a). Wheel rotation sensors on wheelchairs can be used to collect movement information suitable for classification of different types of activity (Ding et al., 2011). One study published in 2015 explored the use of a system for detecting electromagnetic radiation from electrical devices. Using an array of sensors worn on a wrist-band, this system collects and classifies data, identifying electrical devices used by the wearer (Wang et al., 2015). Although the initial design is often somewhat cumbersome, these early prototypes pave the way for future refinements that may themselves lead to commercial innovations. Other efforts might suggest novel uses of existing technology to collect otherwise unavailable information, such as the use of commercial Doppler radar devices to sense sleep patterns without placing sensors on the body (Rahman et al., 2015).

These custom sensing approaches might require help from engineers and signal-processing expertise not necessarily found in HCI research teams, but the broad possibilities for innovation and insight can often be well worth the effort.

Motion and position-sensing devices have many potential applications in HCI research, from assessing everyday activity such as posture, to studying activity while using a system, to forming the basis for new input modalities. Although custom-designed sensors will likely be the approach of choice for those with the engineering capability who are truly interested in pushing the envelope, the availability of cheaper and smaller sensors places these tools within the reach of many HCI researchers.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128053904000133

Delay in Digital Controllers

George Ellis , in Control System Design Guide (Fourth Edition), 2012

4.2.3 Velocity Estimation Delay

The third form of delay is caused by estimating velocity from position. Only motion-control systems that rely on position sensors are subject to this delay. Most controllers are designed for single-integrating plants, such as those described in Table 2.2. Motion controllers control a double-integrating plant because they apply torque, but they normally measure position rather than velocity. (Note that motion controllers relying on tachometer feedback do not suffer from velocity-estimation delay.) The controller usually forms velocity as the difference of the two most recent positions: VN ≈ (PN − PN−1)/T, where VN and PN are the current velocity and position and PN−1 is the position from the previous sample. This estimation generates additional phase lag equivalent to a sample-and-hold. Consider that the difference is formed by a combination of new data (PN) and data one sample old (PN−1), so that the average age of the data is one-half of the sample interval. This delay is identical to that generated by the sample-and-hold (Equation 4.1):

(4.3) TVELEST(s) ≈ 0 dB ∠ −(180 × F × TSAMPLE)°

Velocity estimation delay can be reduced by the inverse trapezoidal method (Equation 5.28).
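The backward-difference estimate and its half-sample phase lag can be sketched as follows; this is a minimal illustration with assumed sample values and frequencies, not code from the text:

```python
# Backward-difference velocity estimate VN = (PN - PN-1) / T and the
# equivalent phase lag of that estimate at a frequency of interest.

T = 0.001   # sample period TSAMPLE in seconds (assumed value)
F = 100.0   # signal frequency of interest in Hz (assumed value)

def estimate_velocity(p_new, p_old, t=T):
    """Estimate velocity from the two most recent position samples."""
    return (p_new - p_old) / t

# The average age of the data (PN and PN-1) is T/2, so the phase lag is
# 360 * F * (T/2) = 180 * F * T degrees, matching Equation 4.3.
phase_lag_deg = 180.0 * F * T

positions = [0.0, 0.010, 0.021]   # hypothetical position samples, meters
v = estimate_velocity(positions[2], positions[1])
print(round(v, 6))     # 11.0 (m/s)
print(phase_lag_deg)   # 18.0 (degrees of lag at 100 Hz with T = 1 ms)
```

At 100 Hz with a 1 ms sample period, the half-sample delay alone costs 18° of phase margin, which is why faster sampling or improved estimators (such as the inverse trapezoidal method) matter in high-bandwidth loops.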

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123859204000047

Image Processing

Rama Chellappa , Azriel Rosenfeld , in Encyclopedia of Physical Science and Technology (Third Edition), 2003

VIII.C Stereomapping and Range Sensing

Let P be a point in a scene and let P 1 and P 2 be the points corresponding to P in two images obtained from two known sensor positions. Let the "lens centers" in these two positions be L 1 and L 2, respectively. By the geometry of optical imaging, we know that P must lie on the line P 1 L 1 (in space) and also on the line P 2 L 2; thus its position in space is completely determined.

The difficulty in automating this process of stereomapping is that it is difficult to determine which pairs of image points correspond to the same scene point. (In fact, some scene points may be visible in one image but hidden in the other.) Given a point P 1 in one image, we can try to find the corresponding point P 2 in the other image by matching a neighborhood of P 1 with the second image. (We need not compare it with the entire image; since the camera displacement is known, P 2 must lie on a known line in the second image.) If the neighborhood used is too big, geometric distortion may make it impossible to find a good match; but if it is too small, there will be many false matches, and in a featureless region of the image, it will be impossible to find unambiguous (sharp) matches. Thus the matching approach will yield at best a sparse set of reasonably good, reasonably sharp matches. We can verify the consistency of these matches by checking that the resulting positions of the scene points in space lie on a smooth surface or a set of such surfaces. In particular, if the matches come from points that lie along an edge in an image, we can check that the resulting spatial positions lie on a smooth space curve.
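A toy sketch of neighborhood matching along the known line (using hypothetical 1-D image rows and a sum-of-squared-differences cost, not data from the text) looks like this:

```python
# Toy 1-D stereo matching: given a point in the left image row, search
# the corresponding row of the right image for the column whose small
# neighborhood best matches, by sum of squared differences (SSD).

def ssd(a, b):
    """Sum of squared differences between two equal-length windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_along_line(left_row, right_row, x, half=2):
    """Find the column in right_row whose neighborhood best matches the
    neighborhood of column x in left_row (smallest SSD wins)."""
    window = left_row[x - half:x + half + 1]
    best_x, best_cost = None, float("inf")
    for cx in range(half, len(right_row) - half):
        cost = ssd(window, right_row[cx - half:cx + half + 1])
        if cost < best_cost:
            best_x, best_cost = cx, cost
    return best_x

# Hypothetical rows: the right row is the left row shifted by 3 pixels.
left  = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 10, 50, 90, 50, 10, 0, 0]
print(match_along_line(left, right, 4))   # 7, i.e., a disparity of 3
```

The window-size trade-off described above shows up directly here: a larger `half` makes the match more distinctive but more sensitive to geometric distortion, while a smaller one invites false matches in featureless regions.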

A strong theoretical basis for this approach evolved around the mid-seventies. Since then, advances have been made in interpolating the depth estimates obtained at positions of matched image features using surface interpolation techniques and hierarchical feature-based matching schemes, and dense estimates have been obtained using gray-level matching guided by simulated annealing. Although these approaches have contributed to a greater understanding of the problem of depth recovery using two cameras, much more tangible benefits have been reaped using a larger number of cameras. By arranging them in an array pattern, simple sum-of-squared-difference-based schemes are able to produce dense depth estimates in real time. Using large numbers of cameras (in excess of 50), new applications in virtual reality, 3-D modeling, and computer-assisted surgery have become feasible.

Consistent with developments in multiscale analysis, stereo mapping has benefited from multiscale feature-based matching techniques. Also, simulated annealing and neural networks have been used for depth estimation using two or more images.

Another approach to determining the spatial positions of the points in a scene is to use patterned illumination. For example, suppose that we illuminate the scene with a plane of light Π, so that only those scene points that lie in Π are illuminated, and the rest are dark. In an image of the scene, any visible scene point P (giving rise to image point P 1) must lie on the line P 1 L 1; since P must also lie in Π, it must be at the intersection of P 1 L 1 and Π, so that its position in space is completely determined. We can obtain complete 3-D information about the scene by moving Π through a set of positions so as to illuminate every visible scene point, or we can use coded illumination in which the rays in each plane are distinctive (e.g., by their colors). A variety of "range sensing" techniques based on patterned illumination has been developed. Still another approach to range sensing is to illuminate the scene, one point at a time, with a pulse of light and to measure the time interval (e.g., the phase shift) between the transmitted and the reflected pulses, thus obtaining the range to that scene point directly, as in radar.
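The light-plane construction reduces to intersecting the viewing ray through L 1 and P 1 with the plane Π. A small sketch with assumed coordinates (not from the text):

```python
# Recover a 3-D scene point as the intersection of the viewing ray from
# lens center L1 through image point P1 with the illumination plane,
# where the plane is given in the form n . X = d.

def ray_plane_intersect(l1, p1, n, d):
    """Return the point where the ray from l1 through p1 meets the
    plane n . X = d, or None if the ray is parallel to the plane."""
    direction = [p - l for p, l in zip(p1, l1)]
    denom = sum(nc * dc for nc, dc in zip(n, direction))
    if abs(denom) < 1e-12:
        return None  # ray parallel to the plane: no unique intersection
    t = (d - sum(nc * lc for nc, lc in zip(n, l1))) / denom
    return [l + t * dc for l, dc in zip(l1, direction)]

# Hypothetical setup: camera lens center at the origin, a point on the
# viewing ray one unit out, and a light plane x = 2.
L1 = (0.0, 0.0, 0.0)
P1 = (1.0, 0.5, 2.0)                    # point fixing the viewing ray
plane_n, plane_d = (1.0, 0.0, 0.0), 2.0  # the plane x = 2
print(ray_plane_intersect(L1, P1, plane_n, plane_d))  # [2.0, 1.0, 4.0]
```

Sweeping the plane through a set of positions (or color-coding the planes) and repeating this intersection for each illuminated image point yields the dense 3-D information described above.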

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122274105008413