Conference Recap: IEEE LANMAN

The Latest Research on IoT and Cloud Computing

I recently had the opportunity to attend IEEE LANMAN, a research conference on computer networking hosted at George Washington University. Researchers travelled from all over the world to showcase their projects, resulting in a three-day program of six speaker sessions, a panel, and a demo/poster session. I was particularly drawn to the sessions presenting new research on the Internet of Things (IoT), given how much work we do with IoT at Mission Data.

IoT’s Changing Architecture

Cloud computing has been an exciting development because it allows people to access information generated by high-powered devices without needing to be at the machine itself. For example, my phone might not be able to store high-definition video directly on the device, but I can access videos stored on an external server through a hosting service such as YouTube. In the IoT, a typical sensor has just enough power to take measurements and send that data elsewhere; it generally lacks the onboard processing power to do any computation of its own. As a result, the data is sent to “the cloud,” where more computing power and storage are available. If there are network constraints or a large collection of sensors, the sensors will usually first send their data to a gateway, which acts as a bridge between the sensors and the cloud servers.
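
To make that flow concrete, here is a minimal sketch of a sensor node that measures and forwards, nothing more. The gateway URL, payload format, and reporting interval are all illustrative assumptions, not details from any of the talks.

```python
import json
import time
import urllib.request

GATEWAY_URL = "http://gateway.local/ingest"  # hypothetical gateway endpoint


def read_sensor():
    """Stand-in for a real driver call; returns one measurement."""
    return {"sensor_id": "noise-07", "db_level": 42.5, "ts": time.time()}


def forward_to_gateway(reading):
    """Ship the raw reading upstream; no processing happens on the sensor."""
    data = json.dumps(reading).encode("utf-8")
    req = urllib.request.Request(
        GATEWAY_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=5)


while True:
    forward_to_gateway(read_sensor())
    time.sleep(60)  # low duty cycle: measure once a minute, compute nothing
```

The important part is what is missing: there is no aggregation or analysis on the device, which is exactly the gap the gateway and the cloud fill.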

In a traditional cloud computing architecture, this gateway does very little data processing; the idea is to have it all done on the more powerful cloud servers, typically in a data center. Because there is a delay associated with sending data from the gateway to the cloud, however, it is sometimes more efficient to add computing power to the gateway itself, so that the cloud server is not needed or is not as heavily used. This type of processing is called edge computing, where gateways are turned into “edge nodes”, and it is a growing trend in cloud computing.
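
A gateway promoted to an edge node might behave like the sketch below: cheap aggregation runs locally, and only work beyond the node's capacity goes upstream. The latency numbers and the threshold logic are my own illustrative assumptions, not figures from the conference.

```python
from statistics import mean

CLOUD_LATENCY_MS = 120  # assumed round-trip cost of a cloud call
EDGE_LATENCY_MS = 5     # assumed cost of computing on the gateway itself

recent_levels = []


def send_to_cloud(reading):
    """Stub for the upstream path; a real edge node would call the data center."""
    return {"result": "processed in cloud", "latency_ms": CLOUD_LATENCY_MS}


def handle_reading(reading, needs_heavy_analysis=False):
    """Process at the edge when possible; fall back to the cloud otherwise."""
    recent_levels.append(reading["db_level"])
    if not needs_heavy_analysis:
        # Simple aggregation stays on the edge node, saving a round trip
        # to the data center for every reading.
        return {"rolling_avg": mean(recent_levels[-10:]),
                "latency_ms": EDGE_LATENCY_MS}
    return send_to_cloud(reading)
```

Multiply that saved round trip by thousands of sensors and the appeal of pushing work to the edge becomes obvious.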

This has led to the concept of “fog computing”, where instead of relying on a central data center, processing and storage are spread across a series of edge nodes. The advantage is that more processing power is available closer to where the data is generated, allowing for faster response times. For example, if a self-driving car relies on cloud computing to process sensor data such as radar or video streams, the car needs incredibly fast response times to avoid crashing. A traditional cloud computing architecture might take too long depending on the load and the location of the car, but because fog computing is more distributed, it can maintain faster response times.

One of the first projects discussed at LANMAN was a proof-of-concept fog computing platform called INPUT. The platform is designed to let telecommunication companies provide edge computing services to end users, with easy virtualization and monitoring features for those who deploy it. Services run as virtual images on the physical hardware, so they are easy to start, stop, and move from node to node. Furthermore, there is minimal delay when services are created or migrated, meaning service for end users is almost never disrupted. While there is no current plan for widespread deployment, INPUT is part of Horizon 2020, a European research program aimed at turning innovative technical research into commercial ventures.
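
I don't have INPUT's actual API to show, but the virtualization idea can be sketched generically: treat each service as an image any node can start, and bring a replica up on the target before stopping the original so users barely notice the move. Everything below is a hypothetical model, not INPUT code.

```python
class EdgeNode:
    """Toy model of an edge node that hosts virtualized services."""

    def __init__(self, name):
        self.name = name
        self.services = {}  # service name -> image identifier

    def start(self, service, image):
        self.services[service] = image
        print(f"{self.name}: started {service} from {image}")

    def stop(self, service):
        del self.services[service]
        print(f"{self.name}: stopped {service}")

    def migrate(self, service, target):
        """Start the replica on the target first, then stop the local copy,
        so the service stays reachable somewhere at every moment."""
        target.start(service, self.services[service])
        self.stop(service)


node_a, node_b = EdgeNode("node-a"), EdgeNode("node-b")
node_a.start("video-analytics", "svc-image:v1")
node_a.migrate("video-analytics", node_b)  # near-zero disruption hand-off
```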

Crowdsourcing the Internet of Things

While fog computing tries to distribute computing power away from a central data center, another talk at LANMAN addressed a different issue: sensor overuse decreasing a device’s lifespan. The problem arises when an array of sensors needs to forward data in a chain, leaving the small number of devices toward the end of the chain to forward large amounts of data. For example, in the scenario discussed by the presenters, a city deployed a series of sensors to monitor noise pollution. To get the data off the sensors, it was forwarded through a series of sinks, nodes specifically designed for data forwarding. Depending on how much data exists and how many sensors send to the same sink, a sink may become overloaded, which can decrease the lifespan of the device. Additionally, the energy distribution across this kind of network is usually uneven, due to differing numbers of devices and amounts of traffic.
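
A little arithmetic shows how lopsided a forwarding chain gets. In the toy model below, every sink relays its own packet plus everything generated upstream of it; the counts are made up purely for illustration.

```python
# Toy model: N sinks in a chain, one packet generated at each per interval.
# Sink i must relay its own packet plus everything from sinks 1..i-1.
N = 10
loads = [i for i in range(1, N + 1)]  # packets relayed per interval at sink i

for position, load in enumerate(loads, start=1):
    print(f"sink {position}: relays {load} packets per interval")

# The last sink handles 10x the traffic of the first, so it drains its
# energy budget (and wears out) roughly 10x faster.
```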


To solve this problem, the researchers behind the talk created the concept of a “mobile sink”: people who download an app to their smartphone can connect to the IoT network and act as a sink node that forwards data. An algorithm determines which sinks are overloaded and then intelligently decides where in the network traffic should be offloaded to a mobile phone. The algorithm can also factor in energy consumption, offloading traffic from parts of the network where too much energy is being drained.
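
I can't reproduce the paper's actual algorithm, but the general idea can be sketched as a greedy assignment: rank fixed sinks by load, and hand the worst offenders' traffic to whichever phones are currently available. The thresholds, field names, and data structures here are all my own assumptions.

```python
def pick_offload_targets(sinks, mobile_sinks, load_limit=100, energy_limit=0.8):
    """Greedy sketch: pair each overloaded fixed sink with a mobile sink.

    Each sink is a dict with 'id', 'load' (packets per interval), and
    'energy_used' (fraction of its budget); mobile_sinks is a list of
    phones currently willing to relay traffic.
    """
    assignments = []
    available = list(mobile_sinks)
    # Relieve the hottest nodes first, since volunteer phones are scarce.
    for sink in sorted(sinks, key=lambda s: s["load"], reverse=True):
        overloaded = sink["load"] > load_limit or sink["energy_used"] > energy_limit
        if overloaded and available:
            assignments.append({"from": sink["id"], "to": available.pop(0)})
    return assignments


# Example: one sink over its load limit, another over its energy budget.
network = [
    {"id": "sink-A", "load": 140, "energy_used": 0.5},
    {"id": "sink-B", "load": 60, "energy_used": 0.9},
]
print(pick_offload_targets(network, mobile_sinks=["phone-1", "phone-2"]))
```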

After testing their algorithm and mobile app, the researchers concluded that adding mobile sinks to a sensor network would definitively increase the lifespan of the network. I like this concept because it solves a problem the “smart cities” of the future will actually have, maximizing the lifespan of the network while minimizing its cost. Smartphones are already ubiquitous, so why not take advantage of them? One possible problem I foresee with a crowdsourced solution like this is building a user base: offloading data forwarding to mobile phones requires available mobile sinks, and for that to happen, people living in a smart city must be comfortable with the concept. It is not entirely clear how many devices would be required for crowdsourcing to be effective, but the public’s attitude toward the idea will be interesting to watch. Will people be comfortable offering their own personal network connections (WiFi or cell service) to benefit the greater community?

Concluding Thoughts

The IoT is an exciting area of research because it has so many opportunities to improve and optimize the way we go about our lives. For its expansion to remain viable, however, research into not just the applications of the IoT but also the architecture behind it will only grow more important, so that networks can continue to become more powerful and cost-efficient. This conference focused solely on the technical side of the problem, but the solutions that come out of it also need to be cost-efficient to be truly viable at a large scale.