The technological future that is coming.
- DIOMEDESGR
- Posts: 6665
- Joined: 24 Oct 2018, 19:46
- Phorum.gr user: DIOMEDESGR
Re: The technological future that is coming.
"Patriotism is the last refuge of a scoundrel." Samuel Johnson, 1775.
Re: The technological future that is coming.
The Pentagon has a laser that can identify people from a distance—by their heartbeat
Everyone’s heart is different. Like the iris or fingerprint, our unique cardiac signature can be used as a way to tell us apart. Crucially, it can be done from a distance.
It’s that last point that has intrigued US Special Forces. Other long-range biometric techniques include gait analysis, which identifies someone by the way he or she walks. This method was supposedly used to identify an infamous ISIS terrorist before a drone strike. But gaits, like faces, are not necessarily distinctive. An individual’s cardiac signature is unique, though, and unlike faces or gait, it remains constant and cannot be altered or disguised.
Long-range detection
A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combating Terrorism Technical Support Office, “but longer ranges should be possible.”
Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).
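The signal chain described above (surface vibration in, heart rate out) can be illustrated with a small sketch. Everything below is an illustrative assumption — a synthetic 72 bpm vibration buried in noise, an arbitrary 100 Hz sampling rate, and a plausible cardiac frequency band — not a detail of how Jetson actually works.

```python
import numpy as np

# Simulate 30 seconds of surface-vibration data sampled at 100 Hz,
# with a 1.2 Hz (72 bpm) heartbeat component buried in noise.
fs = 100
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 1.0, t.size)

# Estimate the heart rate: take the FFT and pick the dominant frequency
# inside a plausible cardiac band (0.7-3 Hz, i.e. 42-180 bpm).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.0)
heart_rate_hz = freqs[band][np.argmax(spectrum[band])]
print(round(heart_rate_hz * 60), "bpm")
```

The 30-second window also hints at why the real device needs about 30 seconds on target: a longer window gives finer frequency resolution and averages out noise.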
The most common way of carrying out remote biometric identification is by face recognition. But this needs a good, frontal view of the face, which can be hard to obtain, especially from a drone. Face recognition may also be confused by beards, sunglasses, or headscarves.
Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.
Jetson extends this approach by adapting an off-the-shelf device that is usually used to check vibration from a distance in structures such as wind turbines. For Jetson, a special gimbal was added so that an invisible, quarter-size laser spot could be kept on a target. It takes about 30 seconds to get a good return, so at present the device is only effective where the subject is sitting or standing.
Better than face recognition
Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.
Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.
One glaring limitation is the need for a database of cardiac signatures, but even without this the system has its uses. For example, an insurgent seen in a group planting an IED could later be positively identified from a cardiac signature, even if the person’s name and face are unknown. Biometric data is also routinely collected by US armed forces in Iraq and Afghanistan, so cardiac data could be added to that library.
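Identifying someone from a signature database implies a matching step: compare the new measurement against every enrolled template and return the closest. A minimal sketch of that idea, using plain waveform correlation with made-up identities and synthetic one-beat templates — nothing here reflects Jetson's actual matching algorithm:

```python
import numpy as np

def normalize(sig):
    sig = np.asarray(sig, dtype=float)
    return (sig - sig.mean()) / sig.std()

def match_signature(measurement, database):
    """Return the enrolled identity whose stored waveform correlates
    best with the new measurement (hypothetical nearest-neighbor sketch)."""
    best_id, best_score = None, -np.inf
    m = normalize(measurement)
    for identity, template in database.items():
        score = float(np.dot(m, normalize(template)) / m.size)  # Pearson r
        if score > best_score:
            best_id, best_score = identity, score
    return best_id, best_score

# Toy database: one heartbeat cycle per person, 50 samples each.
t = np.linspace(0, 1, 50)
db = {
    "subject_a": np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t),
    "subject_b": np.sin(2 * np.pi * t) + 0.6 * np.sin(4 * np.pi * t),
}
rng = np.random.default_rng(1)
probe = db["subject_b"] + rng.normal(0, 0.1, t.size)  # noisy re-measurement
print(match_signature(probe, db)[0])
```

The IED scenario in the text maps directly onto this: the unknown insurgent's waveform is enrolled first without a name, and a later measurement is matched against it.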
In the longer run, this technology could find many more uses, its developers believe. For example, a doctor could scan for arrhythmias and other conditions remotely, or hospitals could monitor the condition of patients without having to wire them up to machines.
https://www.technologyreview.com/s/6138 ... =tr_social
Re: The technological future that is coming.
What is machine learning?
Machine-learning algorithms find and apply patterns in data. And they pretty much run the world.
Machine-learning algorithms are responsible for the vast majority of the artificial intelligence advancements and applications you hear about. They use statistics to find patterns in massive* amounts of data. And data, here, encompasses a lot of things—numbers, words, images, clicks, what have you. If it can be digitally stored, it can be fed into a machine-learning algorithm.
Machine learning is the process that powers many of the services we use today—recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social-media feeds like Facebook and Twitter; voice assistants like Siri and Alexa. The list goes on.
In all of these instances, each platform is collecting as much data about you as possible—what genres you like watching, what links you are clicking, which statuses you are reacting to—and using machine learning to make a highly educated guess about what you might want next. Or, in the case of a voice assistant, about which words match best with the funny sounds coming out of your mouth.
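The "educated guess about what you might want next" can be made concrete with a tiny content-based sketch: score each unseen item by its similarity to what the user just watched. The titles and genre vectors below are invented for illustration; real platforms use far richer signals and collaborative data.

```python
import math

# Each made-up title gets rough [science, drama, comedy] scores.
items = {
    "space_docu": [0.9, 0.1, 0.0],
    "courtroom":  [0.0, 0.9, 0.2],
    "robot_film": [0.8, 0.3, 0.1],
    "sitcom":     [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(just_watched):
    """Suggest the unseen item most similar to the last one watched."""
    return max(
        (t for t in items if t != just_watched),
        key=lambda t: cosine(items[just_watched], items[t]),
    )

print(recommend("space_docu"))
```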
Frankly, this process is quite basic: find the pattern, apply the pattern. But it pretty much runs the world. That’s in large part thanks to an invention in 1986, courtesy of Geoffrey Hinton, today known as the father of deep learning.
What is deep learning?
Deep learning is machine learning on steroids: it uses a technique that gives machines an enhanced ability to find—and amplify—even the smallest patterns. This technique is called a deep neural network—deep because it has many, many layers of simple computational nodes that work together to munch through data and deliver a final result in the form of the prediction.
What are neural networks?
Neural networks were vaguely inspired by the inner workings of the human brain. The nodes are sort of like neurons, and the network is sort of like the brain itself. (For the researchers among you who are cringing at this comparison: Stop pooh-poohing the analogy. It’s a good analogy.) But Hinton published his breakthrough paper at a time when neural nets had fallen out of fashion. No one really knew how to train them, so they weren’t producing good results. It took nearly 30 years for the technique to make a comeback. And boy, did it make a comeback.
What is supervised learning?
One last thing you need to know: machine (and deep) learning comes in three flavors: supervised, unsupervised, and reinforcement. In supervised learning, the most prevalent, the data is labeled to tell the machine exactly what patterns it should look for. Think of it as something like a sniffer dog that will hunt down targets once it knows the scent it’s after. That’s what you’re doing when you press play on a Netflix show—you’re telling the algorithm to find similar shows.
What is unsupervised learning?
In unsupervised learning, the data has no labels. The machine just looks for whatever patterns it can find. This is like letting a dog smell tons of different objects and sorting them into groups with similar smells. Unsupervised techniques aren’t as popular because they have less obvious applications. Interestingly, they have gained traction in cybersecurity.
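Grouping unlabeled data "by smell" is exactly what k-means clustering does. A bare-bones sketch on two well-separated synthetic groups follows; the deterministic initialization (first and last points) is a simplification for this toy run, where random initialization would normally be used.

```python
import numpy as np

def kmeans(X, iters=10):
    """Bare-bones 2-cluster k-means: no labels, just group by proximity.
    Initialized with the first and last points so the sketch is deterministic."""
    centers = np.array([X[0], X[-1]], dtype=float)
    for _ in range(iters):
        # Assign every point to its nearest center...
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each center to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

# Two well-separated "smell groups", with no labels attached.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
labels = kmeans(X)
print(labels)  # the algorithm recovers the two groups on its own
```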
What is reinforcement learning?
Lastly, we have reinforcement learning, the latest frontier of machine learning. A reinforcement algorithm learns by trial and error to achieve a clear objective. It tries out lots of different things and is rewarded or penalized depending on whether its behaviors help or hinder it from reaching its objective. This is like giving and withholding treats when teaching a dog a new trick. Reinforcement learning is the basis of Google’s AlphaGo, the program that famously beat the best human players in the complex game of Go.
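The treat-giving analogy maps neatly onto a two-armed bandit, one of the simplest reinforcement-learning setups. The "tricks" and reward probabilities below are invented; the epsilon-greedy loop is a standard textbook scheme, far simpler than what powers AlphaGo.

```python
import random

random.seed(0)
true_reward = {"sit": 0.2, "roll": 0.8}  # treat probabilities, hidden from the agent
q = {a: 0.0 for a in true_reward}        # the agent's learned value estimates
counts = {a: 0 for a in true_reward}

for step in range(2000):
    # Explore 10% of the time; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    # The environment rewards or withholds the "treat"...
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    # ...and the agent updates its estimate as a running average.
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]

print(max(q, key=q.get))  # the trick the agent learned to prefer
```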
That’s it. That's machine learning. Now check out the flowchart above for a final recap.
*Note: Okay, there are technically ways to perform machine learning on smallish amounts of data, but you typically need huge piles of it to achieve good results.
https://www.technologyreview.com/s/6124 ... ce=twitter
Re: The technological future that is coming.
Here are 10 ways AI could help fight climate change
Some of the biggest names in AI research have laid out a road map suggesting how machine learning can help save our planet and humanity from imminent peril.
The report covers possible machine-learning interventions in 13 domains, from electricity systems to farms and forests to climate prediction. Within each domain, it breaks out the contributions for various subdisciplines within machine learning, including computer vision, natural-language processing, and reinforcement learning.
Recommendations are also divided into three categories: “high leverage” for problems well suited to machine learning where such interventions may have an especially great impact; “long-term” for solutions that won’t have payoffs until 2040; and “high risk” for pursuits that have less certain outcomes, either because the technology isn’t mature or because not enough is known to assess the consequences. Many of the recommendations also summarize existing efforts that are already happening but not yet at scale.
The report’s compilation was led by David Rolnick, a postdoctoral fellow at the University of Pennsylvania, and advised by several high-profile figures, including Andrew Ng, the cofounder of Google Brain and a leading AI entrepreneur and educator; Demis Hassabis, the founder and CEO of DeepMind; Jennifer Chayes, the managing director of Microsoft Research; and Yoshua Bengio, who recently won the Turing Award for his contributions to the field. While the researchers offer a comprehensive list of the major areas where machine learning can contribute, they also note that it is not a silver bullet. Ultimately, policy will be the main driver for effective large-scale climate action.
Here are just 10 of the “high leverage” recommendations from the report. Read the full version of it here.
1. Improve predictions of how much electricity we need
If we’re going to rely on more renewable energy sources, utilities will need better ways of predicting how much energy is needed, in real time and over the long term. Algorithms already exist that can forecast energy demand, but they could be improved by taking into account finer local weather and climate patterns or household behavior. Efforts to make the algorithms more explainable could also help utility operators interpret their outputs and use them in scheduling when to bring renewable sources online.
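The kind of demand-forecasting algorithm described here can be sketched as a simple autoregressive model: predict the next hour's load as a weighted sum of the previous 24 hours, fitted by least squares. The synthetic daily load curve below is an assumption; real utility forecasters layer weather, calendar, and behavioral features on top of baselines like this.

```python
import numpy as np

# Synthetic hourly load for 60 days: a daily cycle plus noise.
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Build lagged design matrix: each row is the 24 hours preceding a target hour.
lags = 24
X = np.stack([load[i : i + lags] for i in range(load.size - lags)])
y = load[lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares fit

# One-step-ahead forecast from the most recent 24 observed hours.
next_hour = float(load[-lags:] @ coef)
mae = float(np.mean(np.abs(X @ coef - y)))  # in-sample error, near the noise level
print(round(next_hour, 1), round(mae, 2))
```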
2. Discover new materials
Scientists need to develop materials that store, harvest, and use energy more efficiently, but the process of discovering new materials is typically slow and imprecise. Machine learning can accelerate things by finding, designing, and evaluating new chemical structures with the desired properties. This could, for example, help create solar fuels, which can store energy from sunlight, or identify more efficient carbon dioxide absorbents or structural materials that take a lot less carbon to create. The latter materials could one day replace steel and cement—the production of which accounts for nearly 10% of all global greenhouse-gas emissions.
3. Optimize how freight is routed
Shipping goods around the world is a complex and often highly inefficient process that involves the interplay of different shipment sizes, different types of transportation, and a changing web of origins and destinations. Machine learning could help find ways to bundle together as many shipments as possible and minimize the total number of trips. Such a system would also be more resilient to transportation disruptions.
4. Lower barriers to electric-vehicle adoption
Electric vehicles, a key strategy for decarbonizing transportation, face several adoption challenges where machine learning could help. Algorithms can improve battery energy management to increase the mileage of each charge and reduce “range anxiety,” for example. They can also model and predict aggregate charging behavior to help grid operators meet and manage their load.
5. Help make buildings more efficient
Intelligent control systems can dramatically reduce a building’s energy consumption by taking weather forecasts, building occupancy, and other environmental conditions into account to adjust the heating, cooling, ventilation, and lighting needs in an indoor space. A smart building could also communicate directly with the grid to reduce how much power it is using if there’s a scarcity of low-carbon electricity supply at any given time.
6. Create better estimates of how much energy we are consuming
Many regions of the world have little to no data on their energy consumption and greenhouse-gas emissions, which can be a major obstacle to designing and implementing effective mitigation strategies. Computer vision techniques can extract building footprints and characteristics from satellite imagery to feed machine-learning algorithms that can estimate city-level energy consumption. The same techniques could also identify which buildings should be retrofitted to maximize their efficiency.
7. Optimize supply chains
In the same way that machine learning can optimize shipping routes, it can also minimize inefficiencies and carbon emissions in the supply chains of the food, fashion, and consumer goods industries. Better predictions of supply and demand should significantly reduce production and transportation waste, while targeted recommendations for low-carbon products could encourage more environmentally friendly consumption.
8. Make precision agriculture possible at scale
Much of modern-day agriculture is dominated by monoculture, the practice of producing a single crop on a large swath of land. This approach makes it easier for farmers to manage their fields with tractors and other basic automated tools, but it also strips the soil of nutrients and reduces its productivity. As a result, many farmers rely heavily on nitrogen-based fertilizers, which can convert into nitrous oxide, a greenhouse gas 300 times more potent than carbon dioxide. Robots running on machine-learning software could help farmers manage a mix of crops more effectively at scale, while algorithms could help farmers predict what crops to plant when, regenerating the health of their land and reducing the need for fertilizers.
9. Improve deforestation tracking
Deforestation contributes to roughly 10% of global greenhouse-gas emissions, but tracking and preventing it is usually a tedious manual process that takes place on the ground. Satellite imagery and computer vision can automatically analyze the loss of tree cover at a much greater scale, and sensors on the ground, combined with algorithms for detecting chainsaw sounds, can help local law enforcement stop illegal activity.
10. Nudge consumers to change how we shop
Techniques that advertisers have successfully used to target consumers can be used to help us behave in more environmentally aware ways. Consumers could receive tailored interventions to promote their enrollment in energy-saving programs, for example.
https://www.technologyreview.com/s/6138 ... =tr_social