Reducing battery life risk in critical IoT devices


The Internet of Things (IoT) is rapidly being adopted for mission-critical applications for several reasons. First, the IoT now incorporates increasingly sophisticated technologies, such as artificial intelligence (AI), augmented reality, edge computing, sensor fusion, and mesh networks, to solve increasingly difficult and important problems. Second, as recent supply chain challenges have demonstrated, the margins for error and delay are slim at best. Third, growing demand for health care, combined with scarce resources, means that many medical services must reduce costs and become more efficient. Finally, the desire to conserve resources means devices need to last longer and operate more reliably.

These trends present many business opportunities in areas that serve human health, safety, food production, environmental protection, and other key aspects of human flourishing. As technical challenges increase, each of the 5 Cs + 1 C of IoT becomes more important, and AI may be part of the solution for several of them.

The 5 Cs + 1 C of the IoT

The term 5 Cs + 1 C of IoT refers to key characteristics that apply to all types of devices that use the IoT to transmit and receive data, as follows:

  • Connectivity—A device’s ability to create and maintain reliable connections, even while roaming. Critical applications cannot accept delayed or lost data.
  • Compliance—A device meets the regulatory requirements for market access. Compliance issues should not delay implementation or lead to a product recall.
  • Coexistence—A device’s ability to operate properly in congested RF bands. Critical devices must avoid packet loss, data corruption, and retransmission attempts that drain the battery.
  • Continuity—A device’s ability to operate without battery failure. Manufacturers must ensure long battery life, especially in implanted devices and in emergencies when mains power is unavailable.
  • Cybersecurity—IoT devices and infrastructure must be robust and resilient against cyber threats, including denial of service, compromised data, and interception of sensitive information. Product development teams can use AI to simulate the variety of exploit-based malware techniques that have exposed vulnerabilities in the past.
  • Client experience—Ideally, customers get an optimized, seamless experience with intuitive apps that work end-to-end across multiple platforms. The challenge is that the number of possible paths through a set of related software applications is virtually unlimited, far too many to test exhaustively. Fortunately, AI can again guide automated test systems based on when code was recently added, the number of defects found in particular sections of code, and other relevant factors.

Growing demands on device batteries

Ensuring that IoT devices sufficiently meet each of these key characteristics increases battery requirements. Previously, a simple sensor could wake up, take a few measurements, transmit data to a hub or access point, and then go back to sleep. Today’s critical devices may incorporate multiple sensors, microcontrollers, digital processors, six-axis accelerometers, sensor fusion logic, voltage converters, power management systems, image processors, microphones, multiple radios, memory, encryption processors, and other hardware components that drain battery life.

Additionally, operating environments are increasingly harsh, with temperature changes, irregular duty cycles and an electromagnetically crowded spectrum. Some operate in places that are difficult or dangerous to reach, and some operate inside the bodies of animals or humans. These factors place unprecedented demands on device batteries.

For medical devices, battery life often has direct health implications. Even in non-critical applications, premature battery failure can generate complaints during post-market surveillance monitored by regulatory agencies. Complaints that become excessive or increase risk to the patient can result in huge costs for the manufacturer.

Challenges for battery testing during product development

Battery testing presents several challenges during product development. Using real batteries may seem ideal, but real batteries come with several limitations.

Difficulty determining initial state of charge

Batteries can be fully charged at the factory, but as soon as they leave the charger they begin to self-discharge due to internal leakage currents. The self-discharge rate varies by battery technology; lithium-ion cells have a lower self-discharge rate than nickel-cadmium (NiCad) or nickel-metal hydride (NiMH) batteries.1 The rate also varies with time and temperature, and this loss of capacity over time is sometimes referred to as calendar fade.2 An engineer therefore cannot assume that a new battery is at precisely 100% state of charge.
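The effect of self-discharge on an assumed initial state of charge can be sketched with a simple compound-decay estimate. The monthly rates below are rough, hypothetical figures for illustration only; real rates depend on chemistry, age, and storage temperature.

```python
# Illustrative estimate of state of charge (SoC) after storage, assuming a
# constant monthly self-discharge rate. All rates here are assumptions.

MONTHLY_SELF_DISCHARGE = {
    "li-ion": 0.02,   # ~2% per month (assumed)
    "nimh": 0.20,     # ~20% per month (assumed)
    "nicad": 0.15,    # ~15% per month (assumed)
}

def soc_after_storage(chemistry: str, months: float, initial_soc: float = 1.0) -> float:
    """Return the estimated SoC (0..1) after `months` of shelf storage."""
    rate = MONTHLY_SELF_DISCHARGE[chemistry]
    return initial_soc * (1.0 - rate) ** months

print(soc_after_storage("li-ion", 6))  # compound decay over six months on the shelf
```

Even with optimistic numbers like these, a battery that sat in a warehouse for months is measurably below 100% SoC when it reaches the test bench.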

Variation within and between manufacturing batches

Like any manufacturing process, battery manufacturing has normal variation. Even within a given batch or date code, batteries vary, and there is often additional variability between plants. That does not mean manufacturers release batteries that are out of specification, but the tolerances exist for a reason. Battery discharge tests should therefore be performed with batteries from different lots acquired at different times.

Variation due to recharge

A recharged battery has different discharge characteristics than a new battery. This effect, known as cycle fade, is due to mechanisms that affect the cathode or the anode. For example, in a lithium secondary cell, the anode ages due to graphite exfoliation, electrolyte decomposition, and lithium plating that leads to corrosion. Similarly, the cathode ages due to several factors, including binder decomposition, oxidation of conductive particles, micro-cracking, and structural disorder.3

Engineers can limit this variability by making sure the battery is fully charged and by using a battery cycler to condition the battery from fully discharged to fully charged.

The importance of battery emulation

Some test engineers attempt to use a basic DC power supply to emulate a battery for battery discharge testing. This approach falls short because a standard power supply does not behave like a battery. A specialized battery emulator, by contrast, models its output on a battery profile and uses features such as programmable output resistance and fast transient response to emulate a real battery.

For example, a test engineer can use an advanced battery test and emulation solution to profile and emulate battery performance quickly and easily. The engineer can charge or discharge a battery of any chemistry to create a battery model of up to 200 points, each point including open-circuit voltage (Voc), series resistance (Ri), and state of charge (SoC).

Figure 1: The first rows of a battery model with 200 data points
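The structure of such a point-based model can be sketched as a table of (SoC, Voc, Ri) triples with interpolation between points. The values below are made up for illustration and stand in for the up-to-200-point model described above.

```python
# Minimal sketch of a point-based battery model: each point pairs a state of
# charge (SoC, %) with an open-circuit voltage (Voc, V) and a series
# resistance (Ri, ohm). All numbers are illustrative assumptions.
from bisect import bisect_left

MODEL = [
    (0,   3.00, 0.250),
    (15,  3.45, 0.180),
    (50,  3.70, 0.120),
    (100, 4.20, 0.095),
]

def lookup(soc: float) -> tuple[float, float]:
    """Linearly interpolate (Voc, Ri) at a given SoC, clamping at the ends."""
    socs = [p[0] for p in MODEL]
    i = bisect_left(socs, soc)
    if i == 0:
        return MODEL[0][1], MODEL[0][2]
    if i == len(MODEL):
        return MODEL[-1][1], MODEL[-1][2]
    (s0, v0, r0), (s1, v1, r1) = MODEL[i - 1], MODEL[i]
    t = (soc - s0) / (s1 - s0)
    return v0 + t * (v1 - v0), r0 + t * (r1 - r0)

voc, ri = lookup(15)  # the 15% SoC point used later in this article
```

A denser model (more points) narrows the interpolation gaps and tracks the knee of the discharge curve more faithfully, which is why a 200-point profile is valuable.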

Battery emulation is especially important when the test engineer changes the device’s hardware configuration or firmware. Without consistent battery emulation, the engineer cannot know whether a variation in run-down time is due to intentional changes or to the variability of the batteries used in the discharge test, as described above. Since battery life is closely related to the other “C’s” of the IoT, any AI techniques that improve the overall functioning of the device can also have a positive impact on battery life.

By using such a profile with a battery emulator, the engineer can avoid having to use an actual battery, eliminating the associated uncertainty and variability. Additionally, a battery emulator allows the user to quickly tune the SoC to any point in the model at the start of a test.

For example, the engineer may want to see how the device behaves toward the end of battery life by starting the test with the SoC set to 15%. With a real battery, the engineer would have to discharge it to 15% and verify that it was at that level, which poses at least three challenges. First, discharging a real battery to the desired SoC can take hours, whereas a battery emulator can set the SoC in a fraction of a second. Second, the engineer would somehow have to determine the battery’s actual SoC. Third, every charge or discharge cycle changes the battery’s behavior due to the cycle fade mentioned earlier.
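What the emulator does at a given model point is conceptually simple: it presents the terminal voltage a real battery would show under load, Vout = Voc − I × Ri. The model-point values below (Voc = 3.45 V, Ri = 0.18 Ω at 15% SoC) are assumptions, not figures from a real profile.

```python
# Sketch of the emulation relationship: the emulated terminal voltage sags
# below the open-circuit voltage in proportion to the load current and the
# programmed series resistance. Model-point values are assumptions.

def terminal_voltage(voc: float, ri: float, load_current: float) -> float:
    """Emulated battery terminal voltage (V) for a load current (A)."""
    return voc - load_current * ri

# At an assumed 15% SoC point (Voc = 3.45 V, Ri = 0.18 ohm), a 0.5 A load
# pulls the output down by 0.09 V:
v = terminal_voltage(3.45, 0.18, 0.5)
```

This voltage sag under load is exactly what a fixed-voltage bench supply fails to reproduce, and it is often what triggers a device's low-battery shutdown logic.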

Figure 2: Example display of advanced battery test and emulation software

Use of results

The engineer can use the information at states of charge near the end of battery life to degrade device performance gracefully and extend device runtime. For example, the engineer can choose to transmit data half as often as usual; in addition to extending battery life, the slower rate would alert the user that the battery is running low. The engineer can also decide to transmit only the minimum and maximum data values, or to transmit only when the values change by more than a certain amount. The engineer could also choose to decline firmware updates once the SoC drops below a small percentage. There would be no point in causing a device’s battery to fail in the middle of a firmware update.
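The end-of-life strategies above can be sketched as a few small policy functions. The threshold, deadband, and interval values are illustrative assumptions, not parameters from any real device firmware.

```python
# Hedged sketch of graceful degradation near end of battery life: halve the
# transmit rate, report only significant changes, and decline firmware
# updates below a low-SoC threshold. All constants are assumptions.

LOW_SOC = 0.15        # assumed "end of battery life" threshold (15%)
DEADBAND = 0.5        # report only changes larger than this (assumed units)
BASE_PERIOD_S = 60    # normal transmit interval in seconds (assumed)

def transmit_period(soc: float) -> int:
    """Transmit half as often (double the period) when SoC is low."""
    return BASE_PERIOD_S * 2 if soc < LOW_SOC else BASE_PERIOD_S

def should_transmit(soc: float, last_sent: float, value: float) -> bool:
    """When SoC is low, send only if the value moved beyond the deadband."""
    if soc >= LOW_SOC:
        return True
    return abs(value - last_sent) > DEADBAND

def accept_firmware_update(soc: float) -> bool:
    """Decline updates near end of life to avoid failing mid-update."""
    return soc >= LOW_SOC
```

With a battery emulator pinned at, say, 10% SoC, each of these policies can be exercised repeatedly and deterministically, something a real battery cannot offer.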

Conclusion

Battery life is becoming increasingly important as the IoT moves into more critical applications, including connected medical devices. Using real batteries to test these devices leads to many problems during the product development process. Test engineers can use advanced battery emulation and test solutions to create detailed, high-resolution battery profiles. They can then use these profiles to emulate the battery and get quick feedback on battery performance at different states of charge, then modify firmware to optimize device performance.

References

  1. https://ecorevenergy.com/secondary-lithium-battery
  2. https://www.mpoweruk.com/life.htm
  3. https://www.mpoweruk.com/failure_modes.htm#mechanisms
