How the Artificial Intelligence Robotics Market Is Gaining Growth Prospects

AMA recently introduced its Artificial Intelligence Robotics Comprehensive Study by Type, an in-depth overview describing the product and industry scope and elaborating on the market outlook and status to 2023. The study is segmented by the key regions that are accelerating marketization. Some of the key players covered in the complete study are NVIDIA (United States), Intel (United States), IBM (United States), Microsoft (United States), Xilinx (United States), Alphabet (United States), SoftBank (Japan), Hanson Robotics (China), Amazon (United States), and Blue Frog Robotics (France).

Request a sample of Artificial Intelligence Robotics Comprehensive Study by Type (Service Robots, Industrial Robots, Industry Segmentation, Military & Defense, Law Enforcement, Healthcare Assistance, Education and Entertainment, Personal Assistance and Caregiving), Application (Public Relations, Stock Management, Others), Technology (Machine Learning, Computer Vision), Offering (Graphics Processing Unit (GPU), Microprocessing Unit (MPU)), Players and Region – Global Market Outlook to 2023.

Market Drivers
Increasing Applications of Robots for Personal Use, Including Entertainment and Companionship
Growing Automation and Robotics Infrastructure is Enhancing the Global Demand

Market Trend
Increasing Demand for Artificially Intelligent Robots in Household Applications
Introduction to Unmanned Assistance Systems in the Farming Industry and Proliferation of AI Enabled Drones

Opportunities
Concentration on Developing AI-Enabled Robots for Special-Purpose Applications Where the Robots Can Generate Maximum Return on Investment
Growing Awareness of Highly Automated Robots in Underdeveloped Countries

Competitive Analysis:
The key players are focusing heavily on innovation in production technologies to improve efficiency and product life. The best long-term growth opportunities in this sector can be captured by ensuring ongoing process improvements and the financial flexibility to invest in the optimal strategies. The company profile section for players such as NVIDIA (United States), Intel (United States), IBM (United States), Microsoft (United States), Xilinx (United States), Alphabet (United States), SoftBank (Japan), Hanson Robotics (China), Amazon (United States), and Blue Frog Robotics (France) includes basic information such as legal name, website, headquarters, market position, historical background, and the top five closest competitors by market capitalization or revenue, along with contact information. Each player's revenue figures, growth rate, and gross profit margin for the past five years are provided in an easy-to-understand tabular format, alongside a separate section on recent developments such as mergers, acquisitions, and new product or service launches.

Market Segments:
The Artificial Intelligence Robotics market has been divided by type, application, and region.
On The Basis Of Type: Service Robots, Industrial Robots, Industry Segmentation, Military & Defense, Law Enforcement, Healthcare Assistance, Education and Entertainment, and Personal Assistance and Caregiving.
On The Basis Of Application: Public Relations, Stock Management, and Others.
On The Basis Of Region, this report is segmented into the following key geographies, with production, consumption, revenue (million USD), market share, and growth rate of Artificial Intelligence Robotics in these regions from 2013 to 2023 (forecast), covering:
– North America (U.S. & Canada) {Market Revenue (USD Billion), Growth Analysis (%) and Opportunity Analysis}
– Latin America (Brazil, Mexico & Rest of Latin America) {Market Revenue (USD Billion), Growth Share (%) and Opportunity Analysis}
– Europe (The U.K., Germany, France, Italy, Spain, Poland, Sweden & RoE) {Market Revenue (USD Billion), Growth Share (%) and Opportunity Analysis}
– Asia-Pacific (China, India, Japan, Singapore, South Korea, Australia, New Zealand, Rest of Asia) {Market Revenue (USD Billion), Growth Share (%) and Opportunity Analysis}
– Middle East & Africa (GCC, South Africa, North Africa, RoMEA) {Market Revenue (USD Billion), Growth Share (%) and Opportunity Analysis}
– Rest of World {Market Revenue (USD Billion), Growth Analysis (%) and Opportunity Analysis}

Getting a grip on human-robot cooperation

The answer to how a robot should hand an object to a person comes from the study entitled “On the choice of grasp type and location when handing over an object,” published in Science Robotics by a research team from The BioRobotics Institute of Scuola Superiore Sant’Anna and the Australian Centre for Robotic Vision. The study reveals the guiding principles that regulate the choice of grasp type during an exchange of objects, encouraging cooperation between a robotic system and a person.

The study, conducted in 2018, analysed how people behave when they grasp an object to use it themselves and when, instead, they need to hand it over to a partner. The researchers investigated grasp choice and hand placement on those objects during a handover when subsequent tasks are performed by the receiver. Passers tend to grasp the purposive part of the objects and leave the “handles” unobstructed for the receivers. Intuitively, this choice allows receivers to comfortably perform subsequent tasks with the objects.

“We realised that, to date, insufficient attention has been given to the way a robot grasps an object in studies on human-robot interaction,” explains Francesca Cini, a PhD student at The BioRobotics Institute and one of the two principal authors of the paper. “This aspect is pivotal in this field. For example, when we pass a screwdriver knowing that the receiver will use it, we leave the handle free to facilitate the grasp and the subsequent use of the object. The aim of our research is to transfer these guiding principles onto a robotic system so that they can be used to select a correct grasp type and to facilitate the exchange of objects.”
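As a rough illustration of the kind of simple rule this principle suggests (a hypothetical sketch, not the team's actual implementation), a handover planner could prefer to grasp any part of an object other than its handle, leaving the handle free for the receiver:

```python
# Illustrative sketch of a handover grasp-selection rule: the passer grasps a
# functional (non-handle) part so the handle stays free for the receiver.
# Object parts and labels below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ObjectPart:
    name: str
    is_handle: bool   # the part the receiver needs free to use the object
    graspable: bool   # a part the passer's gripper can physically hold

def choose_passer_grasp(parts):
    """Prefer graspable parts that are NOT the handle; fall back if necessary."""
    non_handles = [p for p in parts if p.graspable and not p.is_handle]
    if non_handles:
        return non_handles[0]
    graspable = [p for p in parts if p.graspable]
    return graspable[0] if graspable else None

screwdriver = [ObjectPart("handle", True, True), ObjectPart("shaft", False, True)]
print(choose_passer_grasp(screwdriver).name)  # -> "shaft": the handle is left free
```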

The impact of the collaborative study opens new scenarios for technological innovation, bringing benefits to activities where human-robot cooperation is already well established as well as to those where it has yet to take hold. In an industrial context, for instance, it could improve production steps, while in rehabilitation, robots could assist patients with more natural and effective results.

“Collaborative robotics is the next frontier of both industrial and assistive robotics,” says Marco Controzzi, researcher at The BioRobotics Institute and principal investigator of the Human-Robot Interaction Lab. “For this reason, we need a new generation of robots designed to interact with humans in a natural way. These results will allow us to instruct the robot to manipulate objects as a human collaborator would, through the introduction of simple rules.”

“Perhaps surprisingly, grasping and manipulation are regarded as very intuitive and straightforward actions for us humans,” says Valerio Ortenzi, a Research Fellow at the Australian Centre for Robotic Vision and the other principal author of the paper. “However, they simply are not. We intended to shed light on the behavior of humans while interacting in a common manipulation task, and a handover is a perfect example where little adjustments are performed to best achieve the shared goal of safely passing an object from one person to the other.”

“Real-world manipulation remains one of the greatest challenges in robotics, and we strive to be the world leader in the research field of visually guided robotic manipulation,” says Australian Centre for Robotic Vision Director Peter Corke. “This research collaboration with Scuola Superiore Sant’Anna forms a vital partnership towards our goal of overcoming the last barrier to the ubiquitous deployment of truly useful robots into society. While most people don’t think about picking up and moving objects — something human brains have learned over time through repetition and routine — for robots, grasping and manipulation are subtle and elusive.”

The first walking robot that moves without GPS

Desert ants are extraordinary solitary navigators. Researchers were inspired by these ants as they designed AntBot, the first walking robot that can explore its environment randomly and go home automatically, without GPS or mapping. This work opens up new strategies for navigation in autonomous vehicles and robotics.

Human eyes are insensitive to polarized light and ultraviolet radiation, but that is not the case for ants, which use them to locate themselves in space. Cataglyphis desert ants in particular can cover several hundred meters in direct sunlight in the desert to find food, then return in a straight line to the nest without getting lost. They cannot rely on pheromones: they forage when the temperature would burn off the slightest drop. Their extraordinary navigation talent relies on two pieces of information: the heading, measured with a sort of “celestial compass” that orients them using the sky’s polarized light, and the distance covered, measured by counting steps and by optically gauging the rate of movement relative to the ground with their eyes. Distance and heading are the two fundamental pieces of information that, once combined, allow them to return smoothly to the nest.

AntBot, the brand-new robot designed by CNRS and Aix-Marseille University (AMU) researchers at ISM, copies the desert ants’ exceptional navigation capacities. It is equipped with an optical compass that determines its heading by means of polarized light, and with an optical movement sensor directed at the ground to measure the distance covered. Armed with this information, AntBot has been shown to be able, like the desert ants, to explore its environment and to return on its own to its base, with precision of up to 1 cm after covering a total distance of 14 meters. Weighing only 2.3 kg, the robot has six feet for increased mobility, allowing it to move in complex environments, precisely where deploying wheeled robots and drones can be complicated (disaster areas, rugged terrain, exploration of extraterrestrial soils, etc.).
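The navigation strategy itself, known as path integration or dead reckoning, is simple to state: accumulate each outbound leg as a vector built from the measured heading and distance, and the home vector is the negation of the sum. The sketch below illustrates the idea with hypothetical legs; it is not AntBot's actual control code.

```python
# Illustrative path-integration (dead-reckoning) sketch of the ant-inspired
# strategy: sum the outbound legs from heading (celestial compass) and
# distance (step counting / optic flow), then negate to get the home vector.
import math

def integrate_path(legs):
    """legs: list of (heading_deg, distance_m) pairs for the outbound trip."""
    x = y = 0.0
    for heading_deg, dist in legs:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    return x, y

def home_vector(legs):
    x, y = integrate_path(legs)
    heading_home = math.degrees(math.atan2(-y, -x)) % 360.0  # direction back to base
    distance_home = math.hypot(x, y)                          # straight-line distance
    return heading_home, distance_home

outbound = [(0.0, 5.0), (90.0, 3.0), (45.0, 6.0)]  # hypothetical wandering legs
print(home_vector(outbound))  # heading (deg) and distance back to the start point
```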

The optical compass* developed by the scientists is sensitive to the sky’s polarized ultraviolet radiation. Using this “celestial compass,” AntBot measures its heading with 0.4° precision in clear or cloudy weather. The navigation precision achieved with such minimalist sensors shows that bio-inspired robotics has immense capacity for innovation. Here we have a trio of advances: a novel robot has been developed; new, innovative, and unconventional optical sensors have been designed; and AntBot brings new understanding of how desert ants navigate, by testing several models that biologists have proposed for this animal. Before potential applications can be explored in aerial robotics or in the automobile industry, for example, progress must still be made, for instance in operating this robot at night or over longer distances.

This work received support from the Direction Générale de l’Armement, CNRS, AMU, Provence-Alpes-Côte d’Azur region and from ANR under the Equipex/Robotex project.

*This compass is composed of only two pixels topped by two rotating polarized filters, making it equivalent to an optical sensor composed of two rows of 374 pixels. Rotating the filters mechanically is an innovation that has reduced the sensor’s production cost considerably, from over €78,000 to only a few hundred euros, while respecting the constraints of biomimetics.
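To illustrate the principle behind such a rotating-polarizer compass (a generic sketch based on Malus's law, not the ISM team's design), the heading-relevant polarization angle can be recovered from the way a photodiode's reading varies as its filter rotates:

```python
# Generic illustration: recover the sky polarization angle phi from a rotating
# polarizer, where the photodiode reading follows I(alpha) ~ a + b*cos(2*(alpha - phi)).
import numpy as np

def polarization_angle(rotation_deg, intensity):
    """Project the readings onto the first harmonic in 2*alpha (degrees in, degrees out)."""
    alpha = np.radians(np.asarray(rotation_deg))
    I = np.asarray(intensity, dtype=float)
    c = np.sum(I * np.cos(2 * alpha))
    s = np.sum(I * np.sin(2 * alpha))
    return np.degrees(0.5 * np.arctan2(s, c)) % 180.0

# Synthetic reading: true polarization angle of 30 degrees plus a little sensor noise.
angles = np.arange(0, 180, 5)
true_phi = 30.0
readings = 1.0 + 0.8 * np.cos(2 * np.radians(angles - true_phi))
readings += 0.02 * np.random.default_rng(0).standard_normal(angles.size)
print(round(polarization_angle(angles, readings), 1))  # ~30.0
```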

Five ways technology is changing Learning and Development

Technology is fundamentally shaping the way organisations learn…

Thanks to the development of various e-learning technologies, online learning has become a reality. This makes carrying out continuing professional development (CPD) absolutely vital for L&D practitioners who want to keep their skills relevant.

We spoke to Watson Martin Partnership, a leading provider of professional qualifications, to find out five of the key ways technology is changing Learning and Development:

  • The rise of mobile and tablet devices

The rise of portable devices has given us a lot: a way to browse social media and chat on-the-go; a platform to enjoy a range of apps; the opportunity to Google the symptoms of every illness we’ve ever had (OK, this one might not be a benefit).

But aside from offering a sense of convenience in our recreational lives, mobile and tablet devices have also provided the perfect place for interactive learning content.

As materials can now be accessed via a range of devices, learners are offered an increased level of autonomy, flexibility, and control. Not only can they choose when and where they study, they can also learn at their own pace.

These developments have additionally provided L&D practitioners with the foundations they need to better support individual learning processes.

  • The increase in ‘bringing your own device to work’

With an increasing number of employees opting to use their own devices not only at work, but also from home or when working remotely, organisations have come up with new ways to make universal access easier.

By creating learning apps and programmes, organisations are able to give individuals direct access to learning materials and resources from any device, which ultimately helps to maximise their workforce’s productivity.

L&D practitioners also use apps to communicate more effectively with delegates – whether it’s to track an event, register its attendance, or provide an online vault of learning materials.

  • The gamification of learning

Chances are, you’ve had your fair share of mild addictions to mobile games (see: Plants Vs. Zombies/ Candy Crush/Pokemon GO/all of the above).

With the wide variety of mobile and tablet games that are now available, it’s no surprise that the learning and development industry has utilised their popularity.

From gamified micro courses that help employees get to grips with procedures or software, to educational apps that teach through quizzes, videos, flashcards, and memory games – the gamification of learning materials provides a fast and easy way to digest information.

By linking games with learning activities, organisations can tap into people’s desire to socialise, be rewarded, and make choices – which consequently makes them more likely to engage with the content and apply what they’ve learnt in practice.

  • The opportunity to interact virtually

Don’t understand a question? Working on a group project? Just need to vent? No problem.

Whether it’s to assist the communication of virtual teams, or to build an online learning community, the rise of social media and other communication tools has played a big part in supporting learners.

It offers the perfect way to connect groups of people who are getting to grips with the same topic – whether they’re taking an online or classroom course, carrying out an independent learning venture, or doing work-based learning and development.

By creating online groups or utilising hashtags, instructors, L&D professionals, and learners are able to share and access a range of information; from quizzes and questionnaires, to images and tips.

Because social media doesn’t need to be a distraction (as long as you use it right).

  • The assessment of progress

As a result of advancements in tech, tracking your development has never been easier.

In the past, feedback could only be given at the end of a learning intervention, as a retrospective review rather than an on-the-spot assessment. With new online tools, learners can benefit from a better monitoring system that tracks their learning as they go.

This means learners receive a more personalised, tailored and effective approach in their e-learning and online modules. And with more formative types of assessment, self-reflection and improvement are much easier.

Personalised feedback can also be provided if needed, which allows the learner to immediately identify and reflect on their learning.

Large-scale US wind power would cause warming that would take roughly a century to offset

All large-scale energy systems have environmental impacts, and the ability to compare the impacts of renewable energy sources is an important step in planning a future without coal or gas power. Extracting energy from the wind causes climatic impacts that are small compared to current projections of 21st-century warming, but large compared to the effect of reducing US electricity emissions to zero with solar. Research published in the journal Joule on October 4 reports the most accurate modelling yet of how increasing wind power would affect climate, finding that large-scale wind power generation would warm the Continental United States by 0.24 degrees Celsius, because wind turbines redistribute heat in the atmosphere.

“Wind beats coal by any environmental measure, but that doesn’t mean that its impacts are negligible,” says senior author David Keith, an engineering and public policy professor at Harvard University. “We must quickly transition away from fossil fuels to stop carbon emissions. In doing so, we must make choices between various low-carbon technologies, all of which have some social and environmental impacts.”

“Wind turbines generate electricity but also alter the atmospheric flow,” says first author Lee Miller. “Those effects redistribute heat and moisture in the atmosphere, which impacts climate. We attempted to model these effects on a continental scale.”

To compare the impacts of wind and solar, Keith and Miller started by establishing a baseline for the 2012-2014 US climate using a standard weather forecasting model. Then they added in the effect on the atmosphere of covering one third of the Continental US with enough wind turbines to meet present-day US electricity demand. This is a relevant scenario if wind power plays a major role in decarbonizing the energy system in the latter half of this century. This scenario would warm the surface temperature of the Continental US by 0.24 degrees Celsius.

Their analysis focused on the comparison of climate impacts and benefits. They found that it would take about a century to offset that effect with wind-related reductions in greenhouse gas concentrations. This timescale was roughly independent of the specific choice of total wind power generation in their scenarios.
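The logic behind that timescale can be illustrated with a back-of-the-envelope calculation: the warming from atmospheric redistribution appears immediately, while avoided warming from reduced emissions accrues gradually, so the crossover time is roughly the direct warming divided by the annual rate of avoided warming. The rate used below is a hypothetical placeholder chosen only to show the arithmetic; the study derives the actual figure from detailed emissions and climate modelling.

```python
# Back-of-the-envelope illustration (not the paper's model) of the crossover time.
direct_warming_c = 0.24              # immediate CONUS warming in the scenario (from the study)
avoided_warming_c_per_year = 0.0024  # hypothetical accumulation rate of avoided warming

crossover_years = direct_warming_c / avoided_warming_c_per_year
print(f"Benefits overtake the direct impact after ~{crossover_years:.0f} years")
```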

“The direct climate impacts of wind power are instant, while the benefits accumulate slowly,” says Keith. “If your perspective is the next 10 years, wind power actually has — in some respects — more climate impact than coal or gas. If your perspective is the next thousand years, then wind power is enormously cleaner than coal or gas.”

More than ten previous studies have now observed local warming caused by US wind farms. Keith and Miller compared their simulated warming to observations and found rough consistency between the observations and model.

They also compared wind power’s impacts with previous projections of solar power’s influence on the climate. They found that, for the same energy generation rate, solar power’s impacts would be about 10 times smaller than wind. But both sources of energy have their pros and cons.

“In terms of temperature difference per unit of energy generation, solar power has about 10 times less impact than wind,” says Miller. “But there are other considerations. For example, solar farms are dense, whereas the land between wind turbines can be co-utilized for agriculture.” The density of wind turbines and the time of day during which they operate can also influence the climatic impacts.

Keith and Miller’s simulations do not consider any impacts on global-scale meteorology, so it remains somewhat uncertain how such a deployment of wind power may affect the climate in other countries.

“The work should not be seen as a fundamental critique of wind power. Some of wind’s climate impacts may be beneficial. So rather, the work should be seen as a first step in getting more serious about assessing these impacts,” says Keith. “Our hope is that our study, combined with the recent direct observations, marks a turning point where wind power’s climatic impacts begin to receive serious consideration in strategic decisions about decarbonizing the energy system.”

Keith and Miller also have a related paper, “Observation-based solar and wind power capacity factors and power densities,” being published in Environmental Research Letters on October 4, which validates the generation rates per unit area simulated here using observations.

AI-based framework creates realistic textures in the virtual world

Many designers for the virtual world find it challenging to efficiently design believable, complex textures or patterns at large scale. Indeed, so-called “texture synthesis,” the design of accurate textures such as water ripples in a river, concrete walls, or patterns of leaves, remains a difficult task for artists. A plethora of non-stationary textures in the “real world” could be re-created in gaming or virtual worlds, but the existing techniques are tedious and time-consuming.

To address this challenge, a global team of computer scientists has developed a unique artificial intelligence-based technique that trains a network to learn to expand small textures into larger ones. The researchers’ data-driven method leverages an AI technique called generative adversarial networks (GANs) to train computers to expand textures from a sample patch into larger instances that best resemble the original sample.

“Our approach successfully deals with non-stationary textures without any high level or semantic description of the large-scale structure,” says Yang Zhou, lead author of the work and an assistant professor at Shenzhen University and Huazhong University of Science & Technology. “It can cope with very challenging textures, which, to our knowledge, no other existing method can handle. The results are realistic designs produced in high-resolution, efficiently, and at a much larger scale.”

The basic goal of example-based texture synthesis is to generate a texture, usually larger in size than the input, that closely captures the visual characteristics of the sample input — yet not entirely identical to it — and maintains a realistic appearance. Examples of non-stationary textures include textures with large-scale irregular structures, or ones that exhibit spatial variance in certain attributes such as color, local orientation, and local scale. In the paper, the researchers tested their method on such complex examples as peacock feathers and tree trunk ripples, which are seemingly endless in their repetitive patterns.

Their method involves training a generative network, called the generator, to learn to expand (i.e., double the spatial extent of) an arbitrary texture block cropped from an exemplar, so that the expanded result is visually similar to a containing exemplar block of the appropriate size (twice as large). The visual similarity between the automatically expanded block and the actual containing block is assessed using a discriminative network (the discriminator). As is typical of GANs, the discriminator is trained in parallel with the generator to distinguish between actual large blocks from the exemplar and those produced by the generator.
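A minimal sketch of this self-supervised adversarial setup, written with assumed architecture and hyperparameters rather than the authors' released code, might look like the following:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Expands a texture block to twice its spatial extent."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1),  # 2x upsampling
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores large blocks as real (from the exemplar) or fake (generated)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, stride=2, padding=1),  # patch-wise logits
        )

    def forward(self, x):
        return self.net(x)

def sample_block_pair(exemplar, small=64):
    """Crop a random 2x 'containing' block and the centered small block inside it."""
    _, _, H, W = exemplar.shape
    big = small * 2
    top = torch.randint(0, H - big + 1, (1,)).item()
    left = torch.randint(0, W - big + 1, (1,)).item()
    big_block = exemplar[:, :, top:top + big, left:left + big]
    off = small // 2
    small_block = big_block[:, :, off:off + small, off:off + small]
    return small_block, big_block

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

exemplar = torch.rand(1, 3, 256, 256) * 2 - 1  # stand-in for the texture sample

for step in range(100):  # a real training run would be far longer
    small_blk, big_blk = sample_block_pair(exemplar)

    # Discriminator step: real containing block vs. generated expansion.
    fake = G(small_blk).detach()
    real_logits, fake_logits = D(big_blk), D(fake)
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator.
    fake_logits = D(G(small_blk))
    g_loss = bce(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```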

Says Zhou, “Amazingly, we found that by using such a conceptually simple, self-supervised adversarial training strategy, the trained network works near-perfectly on a wide range of textures, including both stationary and highly non-stationary textures.”

The tool is meant to assist texture artists in video game design, virtual reality, and animation. Once the self-supervised adversarial training takes place for each given texture sample, the framework may be used to automatically generate extended textures, up to double the original sample size. Down the road, the researchers hope their system will be able to actually extract high-level information of textures in an unsupervised fashion.

Additionally, in future work, the team intends to train a “universal” model on a large-scale texture dataset, as well as increase user control. For texture artists, controlled synthesis with user interaction will likely be even more useful since artists tend to manipulate the textures for their own design.

Safe to use hands-free devices in the car

With hands-free technology, drivers can make calls and perform a variety of other tasks while still keeping their hands on the wheel and eyes on the road.

“Any activity that places either visual or manual demands on the driver — texting, browsing or dialing a hand-held phone, for instance — substantially increases crash risk. However, our recent study has found that the primarily cognitive secondary task of talking on a hands-free device does not appear to have any detrimental effects,” said Tom Dingus, director of VTTI and the principal investigator of the study.

The goal of the project was to determine the extent to which crash risk could be affected by primarily mental behaviors, known as cognitive distractions. Cognitive distractions occupy the mind but do not require the driver to look away from the road or remove his or her hands from the wheel. Examples include interacting with a passenger, singing in the car, talking on a hands-free cell phone, and dialing on a hands-free phone via voice-activated software.

Using video and other sensor data from the Second Strategic Highway Research Program naturalistic driving study, the largest light-vehicle study of its kind ever conducted, Dingus and the research team analyzed video footage of 3,454 drivers, 905 crashes (including 275 more serious crashes), and 19,732 control periods of “normal driving” for instances of cognitive distraction. For comparison, they also studied examples of drivers performing visual and manual activities, such as texting on a hand-held phone or adjusting the radio.

Drivers who used a hand-held phone increased their crash risk by 2 to 3.5 times compared to model drivers, defined as being alert, attentive, and sober. When a combination of cognitive secondary tasks was observed, the crash risk also went up, although not to nearly the same degree. In some cases, hands-free cell phone use was associated with a lower crash rate than the control group. None of the 275 more serious property damage and injury crashes analyzed were associated with the use of hands-free systems.
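The risk multipliers reported here come from a case-control style comparison: how often a behavior appears in crash events versus in matched periods of normal driving. The sketch below shows the basic odds-ratio arithmetic with hypothetical counts (only the 905-crash and 19,732-control totals are taken from the study):

```python
# Illustrative odds-ratio calculation of the kind used in case-control analyses
# of naturalistic driving data. The exposure counts below are hypothetical.

def odds_ratio(crashes_exposed, crashes_unexposed, controls_exposed, controls_unexposed):
    """OR = (odds of the behavior among crashes) / (odds among control periods)."""
    return (crashes_exposed / crashes_unexposed) / (controls_exposed / controls_unexposed)

# Hypothetical example: hand-held phone use observed in 90 of 905 crashes
# and in 600 of 19,732 "normal driving" control periods.
or_handheld = odds_ratio(90, 905 - 90, 600, 19_732 - 600)
print(f"Estimated crash odds ratio for hand-held use: {or_handheld:.1f}")
```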

“There are a number of reasons why using a hands-free device could keep drivers more engaged and focused in certain situations,” said Dingus. “One is that the driver looks forward more during the conversation. Although engaging in the conversation could cause a small amount of delay in cognitive processing, the driver is still more likely to be looking in the direction of a precipitating event, such as another car stopping or darting in front suddenly. The phone conversation could also serve as a countermeasure to fatigue on longer road trips. Perhaps most importantly, a driver who is talking on a hands-free phone is less likely to engage in manual texting, browsing, dialing, and other much higher-risk behaviors.”

On Feb. 5, state lawmakers passed legislation that aims to make holding a cell phone while driving illegal.

“VTTI’s research has shown consistently that activities requiring a driver to take his or her eyes off of the forward roadway, such as texting or dialing on a handheld phone, pose the greatest risk. It is also important to note that in many newer cars, the driver can do some tasks hands-free using well-designed interfaces. Giving the driver an option to use a safer system will help with compliance for a new law and lead to fewer distraction-related crashes,” said Dingus.

Eight-hundred and forty-three people died on Virginia roads in 2017, according to the Virginia Department of Motor Vehicles. Of these, 208 fatalities and 14,656 injuries were attributed to distracted driving, an 18.2 percent increase from 2016. Texting/cell phone use was cited as one of the top three causes.

Artificial intelligence can identify microscopic marine organisms

Specifically, the AI program has proven capable of identifying six species of foraminifera, or forams — organisms that have been prevalent in Earth’s oceans for more than 100 million years.

Forams are protists, neither plant nor animal. When they die, they leave behind their tiny shells, most less than a millimeter wide. These shells give scientists insights into the characteristics of the oceans as they existed when the forams were alive. For example, different types of foram species thrive in different kinds of ocean environments, and chemical measurements can tell scientists about everything from the ocean’s chemistry to its temperature when the shell was being formed.

However, evaluating those foram shells and fossils is both tedious and time consuming. That’s why an interdisciplinary team of researchers, with expertise ranging from robotics to paleoceanography, is working to automate the process.

“At this point, the AI correctly identifies the forams about 80 percent of the time, which is better than most trained humans,” says Edgar Lobaton, an associate professor of electrical and computer engineering at North Carolina State University and co-author of a paper on the work.

“But this is only the proof of concept. We expect the system to improve over time, because machine learning means the program will get more accurate and more consistent with every iteration. We also plan to expand the AI’s purview, so that it can identify at least 35 species of forams, rather than the current six.”

The current system works by placing a foram under a microscope capable of taking photographs. An LED ring shines light onto the foram from 16 directions — one at a time — while taking an image of the foram with each change in light. These 16 images are combined to provide as much geometric information as possible about the foram’s shape. The AI then uses this information to identify the foram’s species.
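A minimal sketch of how such a classifier could be structured (an assumed architecture for illustration, not the published system) is to stack the 16 differently lit images as a 16-channel input to a small convolutional network that outputs scores for the six species:

```python
import torch
import torch.nn as nn

class ForamNet(nn.Module):
    """Toy classifier: 16 differently lit images in, 6 species scores out."""
    def __init__(self, n_images=16, n_species=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_images, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_species)

    def forward(self, x):  # x: (batch, 16, H, W), one channel per LED direction
        return self.classifier(self.features(x).flatten(1))

model = ForamNet()
batch = torch.rand(4, 16, 128, 128)   # 4 specimens, 16 grayscale images each
print(model(batch).argmax(dim=1))     # predicted species index for each specimen
```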

The scanning and identification takes only seconds, and is already as fast as, or faster than, the fastest human experts.

“Plus, the AI doesn’t get tired or bored,” Lobaton says. “This work demonstrates the successful first step toward building a robotic platform that will be able to identify, pick and sort forams automatically.”

Lobaton and his collaborators have received a grant from the National Science Foundation (NSF), starting in January 2019, to build the fully-functional robotic system.

“This work is important because oceans cover about 70 percent of Earth’s surface and play an enormous role in its climate,” says Tom Marchitto, an associate professor of geological sciences at the University of Colorado, Boulder, and corresponding author of the paper.

“Forams are ubiquitous in our oceans, and the chemistry of their shells records the physical and chemical characteristics of the waters that they grew in. These tiny organisms bear witness to past properties like temperature, salinity, acidity and nutrient concentrations. In turn we can use those properties to reconstruct ocean circulation and heat transport during past climate events.

“This matters because humanity is in the midst of an unintentional, global-scale climate ‘experiment’ due to our emission of greenhouse gases,” Marchitto says. “To predict the outcomes of that experiment we need a better understanding of how Earth’s climate behaves when its energy balance is altered. The new AI, and the robotic system it will enable, could significantly expedite our ability to learn more about the relationship between the climate and the oceans across vast time scales.”