Author Archives: Alun Williams

Freescale aims ABS chip at motorbikes

Motorcycles get their own anti-lock braking (ABS) controller with Freescale’s MC33SB0400 analogue chip, which includes solenoid power stages but needs no heatsink.

“Introduction of the products coincides with the upcoming initial implementation of European Commission legislation making the fitment with ABS for all new motorcycles above 125cc to be mandatory beginning 1 January 2016,” said Freescale. “By incorporating these 7x7mm devices into their designs, manufacturers of ECUs [electronic control units] no longer must rely on larger ABS ICs developed for automobiles.”

The SB0400 works on both wheels, while for scooters and lighter motorcycles there is the one-wheel-only SB0401.

Made on Freescale’s Smartmos mixed-signal process, the devices integrate: wheel speed sensor interface, valve drivers (four or two), motor pump driver, safety switch, watchdog and safety state machine; plus protection and diagnostics for over-voltage, under-voltage, over-temperature, shorted load and open load. An internal charge pump allows external n-channel MOSFETs to be used to control the pump and isolate the solenoids.

Information interfaces include: vehicle speed output, warning lamp driver (14Ω), ISO K-line interface for connection to a plug-in diagnostic tester, and a 16bit SPI serial bus.

Solenoid drivers are rated at up to 5A (160mΩ switch) with PWM up to 5kHz. The pump motor pre-driver works up to 500Hz.
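
As a rough illustration of what driving such a diagnostics interface looks like in software, here is a minimal sketch using the Linux spidev Python bindings. The register address and frame layout are invented for illustration; the real MC33SB0400 register map is in the (log-in protected) data sheet.

```python
# Hypothetical sketch of talking to an ABS controller over a 16-bit SPI
# bus, via the Linux spidev interface. The register address and bit
# layout below are invented for illustration only.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)              # bus 0, chip-select 0 (board-specific)
spi.max_speed_hz = 1_000_000
spi.mode = 0

READ_STATUS = 0x0100        # invented 16-bit "read status register" frame

def xfer16(frame):
    """Send one 16-bit frame MSB-first and return the 16-bit reply."""
    hi, lo = spi.xfer2([frame >> 8, frame & 0xFF])
    return (hi << 8) | lo

status = xfer16(READ_STATUS)
print(f"status word: 0x{status:04X}")  # e.g. over/under-voltage flag bits
spi.close()
```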

The firm’s functional safety programme, Freescale SafeAssure, is available to help system manufacturers achieve compliance with the functional safety standards ISO 26262 and IEC 61508: www.freescale.com/SafeAssure.

Both ICs are in volume production.

The MC33SB0400 demo board video is a good introduction and not just fluff. Access to the data sheet requires a log-in, although a cut-down version is freely available.

Alun Williams

A supercomputer named Helen

It’s not often you get the chance to name a supercomputer. But the opportunity arose with a competition at Imperial College to christen its recently expanded supercomputer, part of the College’s High Performance Computing Service.

There were more than one hundred entries from students, alumni and staff, with the winner being “Helen”, in recognition of Helen Kemp Porter, who was the first female Professor at Imperial.

Professor James Stirling, Provost of Imperial, and Professor Peter Haynes, Academic Champion for High Performance Computing, judged the entries and the winning name was submitted by Kay Barrett, from the university’s Department of ICT (pictured, alongside Professor Stirling and Professor Haynes).

“I was looking at all the great Imperial researchers on our website and Professor Porter’s name really stood out,” said Barrett.

“She was a female pioneer and as a woman working in the technology sector, I thought it was important to recognise the contribution women have made to science throughout history. I am also really chuffed that I now have bragging rights.”

The university provides this biography of Professor Porter:

Professor Porter, who passed away in 1987, was a botanist, biologist and biochemist at Imperial and a Fellow of the Royal Society. She was one of the first British scientists to use chromatography, which is the collective term for a set of laboratory techniques that can be used to separate mixtures. She also pioneered the use of radioactive tracers, which can be used to explore the mechanisms of chemical reactions in experiments.

Being appointed the first female professor in 1959 was an important milestone in Porter’s career and a big leap forward for the male-dominated Imperial at the time. She had already been elected Fellow of the Royal Society three years earlier. She also held the role of Head of the Unit of Plant Physiology, which was based at Imperial and funded by the Agricultural Research Council, until her retirement from College in 1964. Professor Porter then became Second Secretary to the Agricultural Research Council, and in 1972, Adviser to the Secretary.

See also: UK supercomputer gets upgrade

Alun Williams

Turning gold into spintronics

Magnetised gold is at the heart of new research into superconductivity for electronics, led by St Andrews.

Prof Steve Lee of the University of St Andrews

The scientists apparently investigated what happens in a device where a very thin layer of a superconductor – carrying electrical current but without generating any heat – is sandwiched between a layer of a magnetic material and a layer of gold.

Under certain conditions, it seems, the layer of gold becomes magnetic due to charge carriers flowing out of the superconductor into the metal.

The ability to generate and manipulate magnetic currents in this way could have potential for applications in new types of electronic devices, say the universities.

“Superconductors are materials that, if cooled sufficiently, lose their resistance, that is, they carry electricity without dissipating heat,” said Dr Machiel Flokstra, of the School of Physics and Astronomy at St Andrews, who led the team of collaborators.

“This is possible because the electrons that carry the electrical charge bind together into pairs that are able to move without losing energy. Each electron is itself like a tiny bar magnet, since these charged electrons spin about their own axes.”

“When they form into superconducting pairs these electronic ‘spins’ align oppositely, so that the magnetic fields cancel out. It transpires that in these new devices these pairs of electrons can be separated into two currents moving in opposite directions, one with magnetic fields (spins) pointing up and one with them pointing down.”

“The idea of generating ‘spin currents’ is the basis of the emerging field of spintronics. In conventional electronics only electrical charges can be manipulated, but it is hoped in the field of spintronics that electron spins can also be controlled, leading to novel advanced electronic devices.”

The experiments also involved the University of Bath, the University of Leeds, Royal Holloway and Bedford College (University of London), the ISIS Facility and the Paul Scherrer Institute in Switzerland. The large team of collaborators is led by Professor Steve Lee of the University of St Andrews (pictured).

The research is published on the Nature Physics website.

Alun Williams

Conference & exhibition: Technology for building the IoT

London’s first design conference and exhibition for the Internet of Things (IoT) will take place on 3 December 2015 at the Brewery Conference Suite.

The aim of this one-day conference is to present real-world information on designing hardware and software systems for the IoT.

Keynote speakers will include ARM, Imagination, French research organisation CEA-Leti and Bluetooth Special Interest Group.

The IoT is a concept with relevance for many markets – from consumer to automotive, healthcare to smart factories. Increasingly, companies are developing innovative hardware and software to connect battery-powered wireless devices to the internet in a secure way.

Papers from specialists will cover topics such as: designing in security from the ground up; the implementation of a low-power wireless interface; and selecting the best semiconductor or sensor technology for optimum power/performance.

Headline sponsor is Rohde & Schwarz and main sponsors are Avnet and Feabhas.

For more information on speakers/papers and to register for the conference go to: http://iotdesign.electronicsweekly.com/

Speakers include:

  • Gary Atkinson, Director of emerging technologies, ARM
  • Kevin McDermott, Director of strategic marketing, Imagination Technologies
  • Jean-Michel Goiran, Corporate business development for IoT, CEA Leti
  • Martin Woolley, Technical programme manager, Bluetooth SIG
  • Niall Cooling, Chief executive, Feabhas

Alun Williams

Bluetooth Europe 2015: Bluetooth Smart for IPv6 in the IoT

Internet of Things – IoT

The Bluetooth Europe 2015 conference, running in London yesterday and today, included a keynote from Nordic Semiconductor considering one possible realisation of the future dream of the Internet of Things (IoT), involving IPv6.

Entitled ‘Bluetooth Smart Sensor Networks over IPv6: Powering the IoT Market’, it envisaged Bluetooth Smart serving as the data-link layer for IPv6.

Thomas Bonnerud, Director of Product Marketing at Nordic Semiconductor, outlined the possible benefits of combining IPv6 with the low-cost, low-power wireless technology of Bluetooth Smart: Bluetooth Smart, the low-energy version of Bluetooth common to everyone’s smartphone, would serve as the data link with IPv6 running on top, as is already commonly the case with IPv4 and Wi-Fi, for example.

The why

Currently we are in the “Internet of my things”, he said: we are connecting ‘my things’ with Bluetooth Smart, mainly connecting by phone. At this point there is no need for IP.

He also noted that the fastest growing apps in Bluetooth Smart today are those that connect to cloud services.

But the whole IoT concept depends on the adoption and rollout of IPv6, he argued, and thus we have to assume IPv6 is going to happen, albeit with gradual phasing. He then talked about future requirements, giving an example of maintenance of equipment and machinery, such as the health status of freezer cabinets in a large supermarket or warehouse.

“These possible ‘Things around us’ are the next frontier for Bluetooth Smart. Non-personal, not tied to my phone, and with 24/7 connectivity potentially required. For example, a door lock.”

“Maybe the devices are part of a larger distributed network. There will be a desire to leverage existing infrastructure.”

This is where IPv6 over Bluetooth Smart provides a solution, he said.

Bluetooth Smart routers, not gateways

He anticipated the increasing use of Bluetooth Smart routers, rather than gateways, as basic building blocks for larger networks – in other words, multi-mode devices also supporting Wi-Fi, Ethernet and other protocols. He said they were “coming along now” this year, citing technologies such as BlueZ, the Linux Bluetooth stack, and OpenWrt, the Linux-based router firmware. He also flagged 6LoWPAN, an acronym for IPv6 over Low-power Wireless Personal Area Networks.

The barrier to adoption is a lot lower than for other technologies, he said.

Basically, “use proven IP technology, leveraging existing infrastructure” – in this case Bluetooth Smart – connecting things to services, and things to things, either in the local network or remotely.

Note that the Nordic nRF51 IoT Software Development Kit, released at the end of last year, covers this area, extending IP addressing all the way to the ‘things’ in IoT. According to the company:

The first release of the protocol stack includes: Internet Protocol Support Profile (IPSP), 6LoWPAN adaptation layer, IPv6 internet routing layer, User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) transport layers, and Constrained Application Protocol (CoAP) and Message Queuing Telemetry Transport (MQTT) application layers, plus a range of application examples. A compact memory footprint also means that the complete protocol stack can run on the nRF51 Series SoC in a single-chip configuration, enabling developers to minimize power, size, and cost of end products.
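
To illustrate the point of the stack above – that a Bluetooth Smart sensor becomes just another IPv6 host – here is a minimal sketch using nothing but standard sockets. The node address is a placeholder from the IPv6 documentation range, not a real device, and the four-byte payload is a bare-bones CoAP GET header.

```python
# Illustrative sketch only: once a Bluetooth Smart node has an IPv6
# address (via the IPSP/6LoWPAN layers above), any standard IP host can
# reach it with ordinary sockets. The address is from the 2001:db8::/32
# documentation range, i.e. a placeholder, not a real node.
import socket

NODE_ADDR = "2001:db8::1"   # hypothetical sensor node reachable over 6LoWPAN
NODE_PORT = 5683            # CoAP's registered UDP port

with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2.0)
    # Minimal CoAP confirmable GET (version 1, code 0.01, message ID 1).
    # The point is that the transport is plain UDP over IPv6 -- no
    # Bluetooth-specific API is needed at this layer.
    sock.sendto(b"\x40\x01\x00\x01", (NODE_ADDR, NODE_PORT))
    try:
        reply, _ = sock.recvfrom(1024)
        print("node replied:", reply.hex())
    except socket.timeout:
        print("no reply (placeholder address)")
```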

Common IP upper layers

This approach enables heterogeneous networks, using common IP upper layers, he said, removing the need for smart gateways. “It’s all IP on the top – an IP-based application layer, providing interoperability across the networks”.

“We need to align with the Internet. Why should IoT be any different? Why shouldn’t the “things” behave exactly like (Internet-based) servers today? It’s a proven solution: scalable, predictable and robust.

“It’s about not re-inventing the wheel.”

For security, he further cited DTLS/TLS (Datagram Transport Layer Security) as providing secure end-to-end comms – a way for applications to communicate which prevents eavesdropping or tampering. The same means used to secure remote banking transactions, in other words, could be used with Bluetooth Smart.

“It requires”, he admitted, “multiple vendors to support the concept.”

Of course, it was pointed out that IPv6 is one of those technologies forever on the cusp of realisation, forever the bridesmaid in terms of real-world implementation.

His answer was a challenge to the whole premise of IoT, and the realisation of the tech industry’s latest golden horizon: maybe IPv6 has to happen if IoT is going to be real, with millions of extra, directly-addressable nodes; you can’t have an Internet of Things without it.

What are your thoughts? Leave a comment below.

For more on the Nordic nRF51 Series Bluetooth Smart SoCs and IPv6 see ‘Nordic Semiconductor IPv6 over Bluetooth Smart protocol stack for nRF51 Series SoCs enables small, low cost, ultra-low power Internet of Things applications’.

The Bluetooth Europe 2015 conference is organised by the Bluetooth SIG.

IPv6
Internet Protocol version 6 is the most recent version of the Internet Protocol (IP), providing an identification system for computers across the Internet. Using a 128-bit address system (eight groups of four hexadecimal digits), it crucially allows 2^128 addresses. This is far more than the current 32-bit IPv4 address system, which has faced address exhaustion.
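
A quick way to make these numbers concrete – a minimal sketch using Python’s standard ipaddress module:

```python
# Sanity-checking the figures quoted above with the standard library only.
import ipaddress

print(2 ** 128)  # 340282366920938463463374607431768211456 IPv6 addresses
print(2 ** 32)   # 4294967296 IPv4 addresses

# "Eight groups of four hexadecimal digits":
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr)           # compressed form: 2001:db8::1
print(addr.exploded)  # full form: 2001:0db8:0000:0000:0000:0000:0000:0001
```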

Unfortunately IPv4 and IPv6 are not interoperable, which has slowed the latter’s introduction.

Alun Williams

Automotive algorithms anticipate our driving quirks

At the University of California, Berkeley, engineers are preparing autonomous cars to predict what we impulsive, unreliable humans might do next. A team led by Katherine Driggs-Campbell has developed an algorithm that can guess with up to 92 per cent accuracy whether a human driver will make a lane change.

She is due to present the work next month at the Intelligent Transportation Systems conference in Las Palmas de Gran Canaria, Spain.

Enthusiasts are excited that self-driving vehicles could lead to fewer crashes and less traffic. But people aren’t accustomed to driving alongside machines, says Driggs-Campbell. When we drive, we watch for little signs from other cars to indicate whether they might turn or change lanes or slow down. A robot might not have any of the same tics, and that could throw us off.

“There’s going to be a transition phase,” she says. “How do you ensure the autonomous vehicle is clearly communicating with the humans, and how do you know the human is understanding what they’re doing?”

Past algorithms have tried to predict what a human driver will do next by keeping tabs on body movements. If someone seems to be looking over their shoulder a lot, say, that might be a sign that they’re thinking of changing lane.

Driggs-Campbell and her colleagues wanted to see if they could forecast a driver’s actions by monitoring only what is visible outside the car.

To see how human drivers do this, they asked volunteers to drive in a simulator. Each time a driver decided to make a lane change, they pushed a button on the steering wheel before doing so. The researchers could then analyse data from the simulator for patterns at the time of lane changes: Where were all of the cars on the road? How fast was each one going, and had it recently moved or slowed down? Was there sufficient room next to the driver’s car?

They used some of the data to train the algorithm, then put the computer behind the wheel in re-runs of the simulations. The algorithm could predict accurately when the driver would attempt a lane change.
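
As a purely illustrative sketch – with invented feature names and synthetic labels, not the Berkeley team’s actual code or data – a classifier of this general shape could be trained on simulator snapshots like so:

```python
# Illustrative sketch only -- not the Berkeley team's code. Assumes a
# table of simulator snapshots (surrounding-car positions/speeds, gap
# size), labelled 1 if the driver pressed the lane-change button.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical features per snapshot: [gap_ahead_m, gap_beside_m,
# own_speed_mps, neighbour_speed_mps]; labels here are synthetic.
X = rng.normal(size=(2000, 4))
y = (X[:, 1] + 0.5 * X[:, 2] - X[:, 3]
     + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("lane-change prediction accuracy:",
      accuracy_score(y_test, model.predict(X_test)))
```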

Such algorithms would help a self-driving car make smarter decisions in the moment. They could also be used to teach the cars to mimic human driving tics, says Driggs-Campbell.

It’s good work, but teaching a car to understand others is only the beginning, says Raúl Rojas at the Free University of Berlin in Germany. “Humans are very creative about breaking the rules,” Rojas says. “Computers are programmed to never break the rules.”

Syndicated content: Aviva Rutkin – New Scientist

 

Alun Williams

Military upgrades march in step with modular design

Anthony Green describes the processes a manufacturer needs to implement to ensure mil-spec reliability for electronic control circuits.

The pace of development of electronics in the military and aerospace market brings its own challenges for the manufacturer when undertaking major upgrades to legacy products that already meet regulatory standards.

This case study describes a defence electronics firm with a strength in airborne mission-critical systems, which required two separate electronic control modules to be developed and manufactured for a new AESA radar system.

The main control interface would enable communication with the aircraft’s pilot‑controlled electronics. The front‑end controller module was to emit electrical timing reference signals and manage the clock distribution for the antenna system within the radar. Both were part of an upgrade that would replace an older, more mechanical radar system that was proving unreliable.

Reliability

The customer had designated design for reliability as a top priority. The modules had to be suitable for use in military jet fighters, where space and weight are major considerations; to be able to cope with extremes of temperature, vibration and sudden shock from gunfire; and to be compact, lightweight and electromagnetically shielded to minimise noise amid radar signals.

Once integrated into the avionics of the plane, the boards would deliver high-resolution radar at medium and long range. To achieve the maximum reliability possible, the manufacturing design team reviewed and introduced a range of techniques, including careful selection of components and consideration of thermal expansion properties and tin whisker growth.

Careful PCB layout in relation to adjacent metal materials and the addition of conformal coating further mitigated the risk of tin whisker growth.

The design was developed to meet the de-rating guidelines submitted by the customer and the component specifications for voltage, power, frequency and thermal properties. Lightweight, reliable thermal relief techniques were employed, including selective conduction cooling to a cold wall. Potential hot spots were identified by thermal analysis, which led to revisions to the PCB layout.

Standards

After an assessment of the mean time between failures (MTBF), the team made sure the product would meet the military standard MIL-HDBK-217B. Putting the antennas through failure mode and effects analysis (FMEA) identified which areas were potential reliability risks and led to a re-design of the protective circuits to defend against threats such as lightning strikes.

By using signal integrity optimisation of critical high-speed signals and then slowing edges of certain signals, the team reduced the potential noise in the radar antenna, which improved reliability of the critical reference clock distribution.

Moving on from the design stage, the modules were then manufactured to an IPC Class 3 standard using a tin-lead solder process; conformal coating; edge sealing and corner bonding of ball grid arrays (BGAs); and re-balling and hot solder dipping of components for tin whisker growth mitigation.

The final stage of the process was environmental stress screening, which was carried out by the customer before the modules were moved into in-flight trialling. Both electronic control modules are now in operation.

The manufacturing design team had to work closely with the customer and, at times, stray from the script in commoditising the product. They had to look at how the legacy products had worked and re-configure the design to meet 21st-century military certification, while allowing for future upgrades of the components and producing accurate documentation at every stage of the development and production process. Compliance with all regulations had to be achievable, and all documentation had to tell a coherent story of the equipment’s development and testing.

Co-operation between military and aerospace manufacturers and design engineers can bring products to market on time, on budget, with regulations met.

Anthony Green is director of engineering, EMEA at Plexus

Defence & Security Equipment International (DSEI) 2015 takes place at ExCeL, London, 15-18 September. The companies, systems suppliers and specialist electronics firms attending reflect the importance of electronics to military systems.

Alun Williams

D-Wave claims massive quantum computing boost

D-Wave 2X quantum computing

D-Wave Systems in British Columbia, Canada, is the only company in the world selling quantum computers, and it counts Google and NASA among its customers.

But after four years on the market there is still no clear evidence its machines can solve problems faster than ordinary computers.

Now the firm has announced the D-Wave 2X, and claims it is up to 15 times faster than regular PCs. However, outside experts contacted by New Scientist say the test is not a fair comparison.

The theory behind such computers, which exploit the weird properties of quantum mechanics, is sound. A device built using qubits, which can be both a 0 and a 1 at the same time, promises to vastly outperform regular binary bits for certain problems, like searching a database.

But putting that theory into practice has proved tricky, and though experiments show the D-Wave machines display quantum behaviour, it’s not clear this is responsible for speeding up computation.

The D-Wave 2X is the company’s third computer to go on sale, and features more than 1000 qubits – double the previous model. Other changes have reduced noise and increased performance, says D-Wave’s Colin Williams.

D-Wave put the machine through its paces with a series of benchmark tests based on solving random optimisation problems.

600 times faster

For example, imagine a squad of football players, all with different abilities and who work better or worse in different pairs. One of the problems is essentially equivalent to picking the best team based on these constraints.

D-Wave compared the 2X’s results against specialised optimisation software running on an ordinary PC, and found its machine reached an answer between two and 15 times as quickly. And if you leave aside the time it takes to enter the problem and read out the answer, the pure computation time was eight to 600 times faster.

“This is exciting news, because these solvers have been highly optimised to compete head-to-head with D-Wave’s machines,” says Williams. “On the last chip we were head-to-head, but on this chip we’re pulling away from them quite significantly.”

An important wrinkle is that finding the absolute best solution is much more difficult than finding a pretty good one, so D-Wave gave its machine 20 microseconds calculation time before reading out the answer. The regular computers then had to find a solution of equivalent quality, however long that took.
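
As a hedged sketch of the “time to equivalent quality” methodology described above – invented for illustration, not D-Wave’s actual benchmark harness – the classical side of such a test might look like this, with random search standing in for the optimisation software:

```python
# Illustrative "time to target" benchmark of the kind described above.
# A random search stands in for the classical solver; the target is the
# solution quality the quantum machine reached in its fixed time window.
import time
import numpy as np

rng = np.random.default_rng(1)
Q = rng.normal(size=(50, 50))   # random QUBO-style cost matrix
Q = (Q + Q.T) / 2               # symmetrise

def cost(x):
    """Energy of a 0/1 assignment under the cost matrix Q."""
    return x @ Q @ x

target = -50.0                  # stand-in for the hardware's answer quality

start = time.perf_counter()
best = float("inf")
while best > target:
    best = min(best, cost(rng.integers(0, 2, size=50)))
elapsed = time.perf_counter() - start

print(f"classical time to reach target quality: {elapsed:.4f} s "
      f"(best cost {best:.1f})")
```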

This makes it less of a fair fight, says Matthias Troyer of ETH Zurich in Switzerland, who has worked on software designed to enable regular computers to compete with D-Wave. A true comparison should measure the time taken to reach the best answer, he argues. “My initial impression is that they looked to design a benchmark on which their machine has the best chance of succeeding,” he says.

It’s a bit like a race between a marathon runner and a sprinter, in which the sprinter goes first and sets the end point when she gets tired. The marathon runner will struggle to replicate her short-range performance, but would win overall if the race were longer. “Whether the race they set up is useful for anything is not clear,” says Troyer.

But Williams says D-Wave’s customers aren’t interested in the absolute best solutions – they just want good answers, fast. “This is a much more realistic metric.”

Fair comparison?

Questions have also been raised about the PC used in the tests. D-Wave used a single core on an Intel Xeon E5-2670 processor, but that chip has eight such cores, and most PCs have at least four. Multiple cores allow a processor to split up computation and get results faster, so D-Wave’s numbers should come down when compared with a fully utilised chip, says Troyer.

Communication between cores introduces some slowdown, so doubling the number of cores doesn’t double performance, says Williams. Even assuming zero slowdown, you’d need a massive computer to tackle the largest problems, he says. “You’d need 600 classical cores to match us at that scale.”
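
Williams’s point can be illustrated with Amdahl’s law (our gloss, not a formula the company cites): if even a small fraction of the work is serial, such as inter-core communication, adding cores gives diminishing returns.

```python
# Amdahl's law: overall speedup when only part of the work parallelises.
def speedup(cores, parallel_fraction):
    """Speedup for a workload whose parallel_fraction scales with cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for n in (1, 2, 4, 8):
    print(n, "cores ->", round(speedup(n, 0.95), 2), "x")
# 8 cores with 95% parallel work gives ~5.93x, not 8x.
```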

Other computing hardware might be better suited to a competition with D-Wave, says Umesh Vazirani of the University of California, Berkeley – graphics processing units (GPUs) are often used for large-scale parallel computation.

“The proper comparison would be to run simulations on GPUs, and in the absence of such simulations it is hard to see why a claim of speed-up holds water,” Vazirani says.

Williams says D-Wave is planning to publish GPU benchmarks in future.

In the end, the only thing that will prove D-Wave’s machines really are working quantum computers is a runaway performance boost on larger and larger problems, known as “quantum speed-up”. D-Wave explicitly says it is not claiming such a speed-up with these tests – a good sign, says Troyer.

A previous test in 2013 claimed a 3600-fold performance increase but was later discredited and D-Wave took the criticisms on board. “I think they are getting much more serious in the statements they make,” Troyer says.

Syndicated Content: Jacob Aron, New Scientist

 

Alun Williams
