
Monday 30 June 2014

Worst Spots for Weather Extremes Found

Frightful floods, freezes and heat waves favor certain parts of the Northern Hemisphere, the result of strong atmospheric currents that steer extreme weather to the same places over and over again, a new study finds.
Fear a cold winter? Then avoid eastern North America. Hate floods? Stay out of western Asia. Enjoy a long shower? Then drought-prone central North America, Europe and central Asia aren't for you. Can't stand the heat? Rule out heat-wave-prone western North America and central Asia, according to findings published today (June 22) in the journal Nature Climate Change.
The atmospheric currents that control the bad weather are similar to a sky river: They swoop back and forth across the hemisphere at about 3 miles (5 kilometers) above the surface, with giant waves that resemble the Mississippi River's wide bends. The currents also have vertical pressure waves that vary like a riverbed that shallows and deepens — these contribute to the pressure highs and lows in daily weather reports.
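For the physically inclined, those meanders are planetary Rossby waves, and their drift relative to the ground follows a classic textbook dispersion relation (general atmospheric dynamics, not a formula from the new study):

\[ c = U - \frac{\beta}{k^2 + l^2} \]

where U is the background westerly wind, k and l are the zonal and meridional wavenumbers, and β is the northward gradient of the Coriolis parameter. When c ≈ 0 the wave pattern stalls in place, which is how the same troughs and ridges can keep parking extreme weather over the same regions.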


2014 Beach Report: 'Superstar' Seashores & Worst 'Repeat Offenders'

The report found that many beaches across the country have polluted waters caused by stormwater runoff, overflowing sewage and other forms of bacterial contamination. The study was based on water samples collected last year from nearly 3,500 coastal and Great Lakes beaches.


A newly released report that graded the water quality of beaches across the United States found that 10 percent of the country's shores are contaminated or polluted.
The annual beach report, conducted by the Natural Resources Defense Council (NRDC), discovered that one in 10 beaches in the coastal and Great Lakes region of the United States did not meet the Environmental Protection Agency's recommended standards for safe swimming.

Future Soldiers May Wear Bulletproof Spider Silk

Ultra-strong spider silk, one of the toughest known natural fibers, could one day protect soldiers on the battlefield from bullets and other threats, one company says.
Spider silk is light and flexible, and is stronger by weight than high-grade steel. Its potential applications span a wide range of industries, from surgical sutures for doctors to protective wear for the military. But producing and harvesting enough spider silk to make these types of products commercially available has posed a challenge.
Kraig Biocraft Laboratories, based in Lansing, Michigan, genetically engineered silkworms to produce spider silk, and has used the material to create gloves that will soon undergo strength testing.

In Photos: Stunning Views of Titan from Cassini

In Photos: Early Bronze Age Chariot Burial

Friday 27 June 2014

Top Security Best Practices For Mobile

A feel-good technology experience can quickly turn into a gruesome horror story the moment security is compromised. This is why BYOD concerns, data breaches and mobile security fears are top-of-mind issues for many business stakeholders these days. To help ease your mind, here are a few straightforward tips to keep your mobile small business technology safe at home and on the road.

Require screen locks and passcodes on mobile devices

Whether issued by the company or furnished by the employee, any mobile device used for work should have a screen lock and passcode requirement as a first line of defense.

Use the Find My Phone app

Want to see a normally calm adult lose it? Hide their phone from them. Few things wrack our nerves these days like a missing phone. The Find My Phone app can help you quickly locate and retrieve your mobile command center should it go astray. Act quickly, though: the service can no longer locate the phone once it is powered off.

Remote lock and wipe

This essential feature is self-explanatory: it lets you lock the device against unknown users from a remote location and wipe it clean of all existing data. Some users are hesitant to wipe the phone of data immediately in case they find it again, which brings up another great point below…

Back up your data consistently

If you back up regularly, you can wipe confidential data and stave off major breaches while staying confident that you’ll be able to restore everything if and when the device is recovered. Wiping should be an immediate security reflex, so take data-loss worries off the table by building regular backups into your small business technology security schedule. Storage solutions like Dropbox offer a simple way to save your information to the cloud for easy access from any web-enabled device.

Disable the auto-fill option

Sure, auto-fill is convenient, but if your device ends up in the wrong hands you’re essentially granting access to whatever the new device “owner” wants to see. Instead, you’re better off using…

Single sign-on

Single sign-on (SSO) solutions like Ping Identity allow for automated user management for Concur and all your apps while ensuring everyone’s identity information is safely tucked away behind your firewall where it belongs. SSO also saves time: IT can manage access and user provisioning from one simple SaaS-based management console, and end users have one-click access to all cloud applications and other small business technology, whether at the office or on the go.

Concur’s app developer perspective

Way too many developers think about security after the fact. At Concur, we plan and implement security specifications as we are building the app. In fact, our developers work in partnership with our security team members. With Concur, you can rest assured that your personal and private user info is NEVER stored on your device. Instead, your information is encrypted using AES, the same encryption standard used by highly secure military and financial apps.

iFind: World’s first battery-free Bluetooth location tag raises $500,000, despite all the hallmarks of being a giant scam

iFind Bluetooth tracking tag, on a cat
In what is best described as a slow-motion bank robbery or train wreck (or both combined), it appears that a Kickstarter scam is about to walk away with over $500,000. The iFind, developed by WeTag, purports to be a battery-free Bluetooth location tracking tag, which can be stuck to any and all of your valuables (your wallet, your phone, your cat). WeTag says it has developed some magical, patent-pending technology that allows the iFind to harvest enough power from the air to operate the Bluetooth beacon forever, without a backup battery. Furthermore, along with the iFind iOS and Android tracker apps, each tag also has an integrated “loud alarm” to help you track it down. Sadly, a bit like the Solar Roadways project, iFind sounds too good to be true, and 10,000 unfortunate backers are probably about to be conned out of $500,000.
On paper, the iFind sounds absolutely amazing. I mean, come on, it’s the size of a quarter, harvests free energy from the air, and it has a detection range of 200 feet (60m). If you leave the 200-foot range, the “UHelp” feature extends the catchment area to all of your friends’ phones, too. Instead of a battery that you need to replace (like last year’s StickNFind location tags), the iFind has a “unique power bank” that allows it to be powered purely via harvested electromagnetic radiation (radio waves). Each iFind even has a built-in accelerometer, allowing you to trigger an alarm on your phone by shaking one of the tags (in case you misplace your phone). Is your mind blown yet?
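A rough back-of-envelope calculation shows why the pitch strains physics (the numbers below are my own illustrative assumptions, not WeTag’s figures). The RF power a small tag can harvest from a distant transmitter is bounded by the Friis transmission equation:

\[ P_r = P_t G_t G_r \left( \frac{\lambda}{4\pi d} \right)^2 \]

Even with generous assumptions (a 1 W, 2.4 GHz source with λ ≈ 0.125 m, unity-gain antennas, and a distance of only d = 10 m), the path term (λ/4πd)² comes to roughly 10⁻⁶, leaving P_r on the order of 1 µW. A Bluetooth LE radio draws on the order of 10 mW while transmitting, some four orders of magnitude more, which is why legitimate RF-harvesting designs pair a storage element with a very low duty cycle instead of promising an always-on beacon.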

Thursday 26 June 2014

From kickoff to the final goal, Google is your guide to the beautiful game

From the last-minute U.S. goal against Algeria in ‘10 to the headbutt watched ‘round the world in ‘06, every four years the beautiful game captures the imagination of billions of people. This year, wherever you are, Google is bringing you closer to the action than ever before.

Don’t miss a minute
For the first time, a simple search for [world cup] or [world cup usa] will give you team lineups before the match, live scores, and even up-to-the-minute information about goals and player stats.


You can also stay updated on your favorite teams with Google Now—you don’t even have to search. Learn more on Inside Search.


What does the world want to know during the tournament?
Google Trends is your real-time guide to the players, teams and moments that are capturing the world’s attention. At google.com/worldcup you can explore these moments throughout the tournament, whether it’s insight on how a country is feeling ahead of a big match, or where fans stand on a controversial game-winning call. 

Take in the stadiums and streets with Street View
With Street View in Google Maps, you can explore the sights and culture of this year’s tournament, from the 12 stadiums to the iconic painted streets, one of Brazil’s tournament traditions.


As the world unites under a common love for a single sport, there's sure to be a lot of action. From dramatic tumbles to magisterial strikes, and from contested headers to flops and flags, we'll be there to help you discover and connect with the moments that matter most.

Sunday 15 June 2014

Mind-controlled exoskeleton prepares to kick off the 2014 World Cup

It’s showtime for the 2014 World Cup. On Thursday, one of the eight partially paralyzed men and women who have been training with Miguel Nicolelis’ robotic, mind-controlled exoskeletons will stand up from their wheelchair and make the opening kickoff. The technology used to do this springs from a close collaboration between Nicolelis the neuroscientist and electrical engineer Gordon Cheng. Together they lead the Walk Again Project, which now promises to make history.
The collaboration has its origins in a gimmicky demonstration from 2008, when signals recorded from one of Nicolelis’ lab monkeys at Duke were used to control one of Cheng’s humanoid robots in Kyoto over the internet. The demonstration spawned a number of copycat experiments in which so-called brain-to-brain interfaces were set up between different animals, and even humans. Since then Nicolelis has made one notable advance after another toward building workable brain-computer interfaces (BCIs) and understanding how the brain adapts to them.
In particular, some of his recent milestones include: Simultaneously recording from over 1000 single neurons (potentially up to 2000) with multi-electrode arrays; giving rats the ability to feel infrared light; advancing our knowledge of how the somatosensory cortex of primates incorporates virtual avatar arms into the existing body schema; and even more esoteric stuff like making fundamental insights into the computability of the brain.
The above experiments may sound slightly removed from the harsh realities of spinal injury, but they are anything but. Last Friday, swimmer and six-time Olympic gold medalist Amy Van Dyken severed her spine in an ATV accident in Arizona. The injury, at the level of the 11th vertebra in the thoracic spine, is precisely the kind of trauma the potential wearers of the new exoskeleton suit all have. For this iteration of the suit there will be no internal 1000-electrode arrays, only surface-recorded EEG signals. But with some of the other advanced tech that the suit has, that’s all that will be needed for the task at hand.
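The Walk Again Project hasn’t published its decoding pipeline here, but a minimal sketch of the standard surface-EEG approach, band-power features driving a discrete command, might look like the following. Everything in it (sampling rate, frequency band, threshold, the synthetic signals) is a hypothetical illustration, not the project’s actual code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250           # assumed EEG sampling rate, Hz
MU_BAND = (8, 12)  # mu rhythm over motor cortex weakens during imagined movement

def band_power(eeg, fs, band):
    """Average power of a 1-D EEG trace inside the given frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return float(np.mean(filtfilt(b, a, eeg) ** 2))

def decode_step_intent(eeg_window, baseline_power, threshold=0.6):
    """Crude motor-imagery detector: mu-band power dropping well below the
    resting baseline (event-related desynchronization) triggers one step."""
    return band_power(eeg_window, FS, MU_BAND) < threshold * baseline_power

# Demo on synthetic data: a strong resting mu rhythm, then a suppressed one.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
rest = 10e-6 * np.sin(2 * np.pi * 10 * t) + 2e-6 * rng.standard_normal(t.size)
imagery = 3e-6 * np.sin(2 * np.pi * 10 * t) + 2e-6 * rng.standard_normal(t.size)
print(decode_step_intent(imagery, band_power(rest, FS, MU_BAND)))  # True
```

Real systems add spatial filtering across many electrodes and per-user calibration; the point is only that a handful of scalp signals can be reduced to a binary “step” command.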
One crucial part of the suit is an artificial skin recently developed by Cheng (now at the Technical University of Munich). Known as “CellulARSkin” this smart material has integrated sensors and actuators to provide feedback about the suit and its movements to the wearer. The elemental components of CellulARSkin are hexagonal electronics packages that include a microprocessor and sensors for touch proximity, pressure, vibration, temperature and three-dimensional motion. The signals generated by the sensors are fed back to a network of tiny motors embedded in the material and wrapped around various parts of the body — places like the arms that can still feel and therefore sense the motors.
The suit itself is reminiscent of the Sarcos exoskeleton developed by Raytheon. While it does not need the full hydraulic power capabilities of the Sarcos suit, certain Iron Man-style features may be desirable. In fact, the suits to be worn by the police on patrol at the World Cup are not too shabby themselves; they might even give RoboCop a run for his money.
There is certainly a lot more to be said here about this technology, but it may be wise to wait until tomorrow and let the suit itself do the talking — and then we can make our comments.

How does a lithium-ion battery work, and why are they so popular?

New research from MIT has been making the rounds this week, and while its core insight might seem modest, that very fact highlights just how quickly technology really does move these days. While lithium-ion batteries (LIBs) are all over the world, the truth is we still don’t really know how they work. In particular, as scientists try out more and better new materials for electrodes, each one brings slight variations in function and performance. One of the most promising electrode materials is lithium-iron phosphate, and now researchers have a much better understanding of exactly how it charges and discharges — which should hopefully guide the way to improving those processes.

How does a lithium-ion battery work?

First, we need to look at how a lithium-ion battery works in general. Like any other battery, its basic design sees an electrolyte (the “transport medium”) ferrying lithium ions back and forth between the negative electrode and the positive electrode. In a totally discharged battery, the mobile lithium ions are all associated with the positive electrode – their chemical properties keep them bound to the positive electrode material while they lack electrons. If we give them electrons by pumping electricity into the system (recharging), they will naturally dissociate from the positive electrode and migrate back to the negative electrode. Once they’re all lined up on the other side, loaded with nice high-energy electrons, we call the battery “charged.”
This stable state breaks down when we provide an avenue for the electrons now trapped at the negative electrode to travel down their charge gradient to the positive side of the battery — this takes away electrons from lithium in the negative electrode and makes them again Li+, causing them to naturally migrate all the way back. We can use that negative-to-positive electron flow to power everything from pacemakers to electric cars, and it all ultimately comes down to the back-and-forth movements of ions. Incidentally, it’s only recently that scientists have discovered exactly why too many back-and-forth reactions cause a battery to slowly die.
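To make the shuttling concrete, here are the standard discharge half-reactions for the chemistry this article goes on to discuss (graphite negative electrode, lithium-iron-phosphate positive electrode); this is textbook electrochemistry, not something specific to the MIT study:

\[ \text{negative electrode: } \mathrm{LiC_6 \rightarrow C_6 + Li^+ + e^-} \]
\[ \text{positive electrode: } \mathrm{FePO_4 + Li^+ + e^- \rightarrow LiFePO_4} \]

Charging simply drives both reactions in reverse: the external supply forces electrons back onto the negative electrode, and the Li⁺ ions follow through the electrolyte.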

Why lithium-ion batteries are popular

The main reason you’ve heard the term “lithium-ion battery” before is energy density; a LIB setup can pack a lot of energy into a very small space. More than that, Li-ion batteries offer decent charge times and a high number of discharge cycles before they die. If you use pure lithium metal at the electrodes, you’ll get much higher energy storage but no ability to recharge — depending on your choices for electrodes, you can powerfully affect your battery’s performance. Among other things, energy density is related to the number of lithium ions (and thus electrons) the electrodes can hold per unit of surface area.
This diagram shows how the Solid Solution Zone lines up next to charged and discharged areas of the electrode.
This MIT study [DOI: 10.1021/nl501415b - “In Situ Observation of Random Solid Solution Zone in LiFePO4 Electrode”] specifically looked at the cathode material lithium-iron phosphate. Lithium-iron phosphate batteries show promise for everything from electric cars (likely) to storage of grid power (less likely), but when it was originally introduced, LiFePO4 showed little promise for battery tech. In its pure form, lithium-iron phosphate shows poor electrical abilities — but crush it up into nanoparticles and coat it with carbon, and the story changes quite a bit. The incredible jump in ability when turned into nanoparticles is described as a major surprise for battery researchers, and a major win for nanoscience.
The main reason for excitement over the new nano-cathode, beyond its impressive-but-not-amazing storage and discharge abilities, is that it discharges at a totally uniform voltage. This means batteries needn’t incorporate devices to regulate that voltage, which could make them cheaper and smaller, and it also allows them to discharge at full voltage until totally empty. It does this, we now know, by creating a zone called a Solid Solution Zone (SSZ), a buffer area of low lithium density that seems to soften the harsh boundary between charged (LiFePO4) and discharged (FePO4) portions of the electrode during use. This seems to be behind the material’s amazing abilities, and pumping up this SSZ through design could make lithium-ion tech last even longer.
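That perfectly flat voltage also has a tidy thermodynamic reading (a standard result, not a claim of the paper itself): cell voltage tracks the free-energy change per unit of charge transferred,

\[ E = -\frac{\Delta G}{zF} \]

with z = 1 electron per lithium and F the Faraday constant. As long as charged FePO4 and discharged LiFePO4 coexist in the electrode, each additional lithium simply converts one phase into the other at the same ΔG, so E stays pinned until one phase runs out.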
Technology does seem to be coming for this aging battery standard, however, and it will need some major upgrades to keep up with the times. It’s getting them, with huge design upgrades that hold a lot of promise. Still, everything from improved capacitors to super-batteries based on cotton could supplant lithium as the king of energy storage — we may find that improvements in our understanding of conventional batteries are simply too little, too late.

D-Wave confirmed as the first real quantum computer by new research




Ever since D-Wave arrived on the scene with a type of quantum computer capable of performing a problem-solving process called annealing, questions have flown thick and fast over whether or not the system really functioned — and, if it did function, whether it was actually performing quantum computing. A new paper by researchers who have spent time with the D-Wave system appears to virtually settle this question — the D-Wave system appears to actually perform quantum annealing. It would therefore be the first real quantum computer.
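For intuition about what “annealing” means here, it helps to see the classical version that D-Wave’s hardware is meant to outperform. The toy below is my own illustrative simulated annealer for a small Ising problem (hypothetical couplings, not D-Wave’s algorithm; the quantum machine anneals in hardware rather than running code like this):

```python
import math
import random

def ising_energy(spins, h, J):
    """E = sum_i h[i]*s_i + sum_(i,j) J[i,j]*s_i*s_j for spins in {-1, +1}."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    return e + sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def simulated_annealing(h, J, steps=20000, t_start=5.0, t_end=0.01):
    spins = [random.choice([-1, 1]) for _ in h]
    energy = ising_energy(spins, h, J)
    for step in range(steps):
        temp = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(len(spins))
        spins[i] *= -1                                # propose one spin flip
        new_energy = ising_energy(spins, h, J)
        # Metropolis rule: always accept downhill moves, uphill with p = e^(-dE/T)
        if new_energy > energy and random.random() >= math.exp((energy - new_energy) / temp):
            spins[i] *= -1                            # reject: undo the flip
        else:
            energy = new_energy
    return spins, energy

# Tiny four-spin problem with made-up fields and couplings.
h = [0.1, -0.2, 0.0, 0.3]
J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 0.7, (0, 3): -0.3}
print(simulated_annealing(h, J))
```

The slow cooling schedule lets the system escape local minima early on and settle into a low-energy configuration; quantum annealing aims at the same minimum but explores the landscape by tunneling instead of thermal hops.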
Up until now, it’s been theorized that D-Wave might be a simulator of a quantum computer based on some less-than-clear benchmark results. This new data seems to disprove that theory. Why? Because it shows evidence of entanglement. Quantum entanglement refers to a state in which two distinct qubits (two units of quantum information) become linked. If you measure the value of one entangled qubit as 0, its partner will also measure 0. Measure a 1 at the first qubit, and the second qubit will also contain a 1, with no evidence of communication between them.
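In the standard notation, the simplest such entangled pair is the Bell state (a textbook definition, included for reference):

\[ |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left( |00\rangle + |11\rangle \right) \]

Measuring either qubit collapses the joint state, so the two outcomes always agree: exactly the 0/0-or-1/1 correlation described above, with no signal passing between the qubits.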
Researchers working with a D-Wave system have now illustrated that D-Wave qubit pairs become entangled, as did an entire set of eight qubits. (The D-Wave uses blocks of eight qubits, as shown below). [DOI: http://dx.doi.org/10.1103/PhysRevX.4.021041 - "Entanglement in a Quantum Annealing Processor"]
The D-Wave 2 Vesuvius chip, with 512 qubits
Assuming the experimental evidence holds up, this fundamentally shifts the burden of proof from “Prove D-Wave is quantum,” to “Prove the D-Wave isn’t quantum.” Evidence of entanglement is the gold standard for whether or not a system is actually performing quantum computing.

So, now what?

Now that we have confirmation that D-Wave is a quantum computer (or at least, as close to confirmation as we can likely get), the question is: how do we improve it? As we’ve previously covered, the D-Wave isn’t always faster than a well-tuned classical system. Instead of arguing over whether an Nvidia Tesla GPU cluster with customized software is a better or worse investment than a supercomputer that’s cryogenically cooled and computes via niobium loops, we’re going to look at what D-Wave needs to do to improve the capabilities of its own system. As Ars Technica points out, the architecture is less than ideal — for some problems, D-Wave can offer fewer than 100 effective qubits despite newer systems having 512 qubits in total, because the chip is only sparsely connected. Each group of eight qubits is densely connected internally, but each island of eight has just eight connections to two adjacent islands.
The D-Wave Two’s cryogenic cooling system. There’s a qubit chip in there, somewhere.
D-Wave has stated that it intends to continue increasing the number of qubits it offers in a system, but we can’t help wondering if the company would see better performance if it managed to scale up the number of interconnects between the qubit islands. A quantum system with 512 qubits but more than just two connections to other islands might allow for much more efficient problem modeling and better overall performance.
Inevitably this kind of questioning turns to the topic of when we’ll see this kind of technology in common usage — but the answer, for now, is “you won’t.” There are a number of reasons why quantum computing may never revolutionize personal computing, many of them related to the fact that it relies on large amounts of liquid nitrogen. According to D-Wave’s documents for initial deployments, its first systems in 2010 required 140L of LN2 to initially fill and boiled off about 3L of fluid a day. Total tank capacity was 38L, which required twice-weekly fill-ups. The Elan2 LN2 production system is designed to produce liquid nitrogen in an office setting and can apparently create about 5L of LN2 per day at an initial cost of $9500. [Read: Google’s Quantum Computing Playground turns your PC into a quantum computer.]
Did I mention that you have to pay attention to Earth’s magnetic field when installing a D-Wave system, that the early systems created about 75 dB of noise, and that the whole thing weighs 11,000 pounds? Many of these issues confronted early computers as well, but the LN2 issue is critical — quantum computing, for now, requires such temperatures — and unless we can figure out a way to bring these systems up to something like ambient air temperature, they’ll never fly for personal use. Rest assured that lots of research is being done on the topic of room-temperature qubits, though!

Intel stuck with $1.45 billion fine in Europe for unfair and damaging practices against AMD


For years, Intel has quietly fought a battle to dodge the EU’s ruling that it abused its dominant market position and damaged AMD. Today, it lost that fight. The EU has affirmed that the case was properly decided in 2009, and that the fine of 1.06 billion euros (around $1.45 billion) was proportionate. This fine is in addition to the $1.25 billion settlement that Intel ponied up in 2009 following an FTC investigation in the US.
The EU’s second-highest court states: “The General Court considers that none of the arguments raised by Intel supports the conclusion that the fine imposed is disproportionate. On the contrary, it must be considered that that fine is appropriate in the light of the facts of the case.”
Intel, obviously, disagrees and has strenuously argued the contrary — but a review of the 542-page EU findings of fact in the case makes that argument untenable.
The EU found, in part:
  • That Intel paid rebates to manufacturers on the condition that they would buy all (Dell) or nearly all of their CPUs from Intel.
  • That it paid retail stores rebates to only stock x86 parts.
  • That it paid computer manufacturers to halt or delay the launch of AMD hardware, including Dell, Acer, Lenovo, and NEC.
  • That it restricted sales of AMD CPUs based on business segment and market. OEMs were given permission to sell higher percentages of AMD desktop chips, but were required to buy at least 95% of business processors from Intel. At least one manufacturer was forbidden to sell AMD notebook chips at all.
Intel then further restricted manufacturer sales by allowing that remaining 5% of business systems to be sold only to small and medium enterprises, only via direct distribution, and only if the business distributor pushed back the launch a further six months.
Reuters has a further point-by-point list of the various findings.

The difference between “aggressive competition” and “antitrust abuse”

One of the differences between the EU and US judicial systems is that in the US, AMD would have had to prove that consumers were directly harmed by Intel’s actions. In the EU, consumer harm is not required to prove market abuse — simply that a company abused its dominant position and distorted the competitive market. Given the scope and scale of Intel’s actions, it’d be difficult to argue that this didn’t happen.
Intel ramped up these behaviors in the early 2000s when the Pentium 3 had run out of gas and the P4 was floundering. It began leaning on them even more aggressively once Opteron and the Athlon 64 launched — Intel executives are on-record as referring to Dell as “the best friend money can buy.” Elsewhere, Intel openly acknowledges that it used rebates (also called the MCP, Meet Comp Program) to keep OEMs away from AMD.
Intel gave AMD share in desktops, where margins were the smallest.
At one point, a Dell executive notes that Intel executives “are prepared for an all-out war if Dell joins the AMD exodus. We get ZERO MCP for at least one quarter while Intel ‘investigates the details’ … We’ll also have to bite and scratch to hold 50%, including a commitment to NOT ship in Corporate. If we go in Opti, they cut it to <20% and use the added MCP to compete against us.” (Opti meaning Optiplex, Dell’s business segment). Note that Intel isn’t just threatening to withhold payment — it’s telling Dell it’ll take the MCP money the company would’ve gotten, and give it to Dell’s competitors.
In the past, I’ve gone after OEM manufacturers for failing to take any kind of risks in product marketing and creating an abominable laptop market, but there is another side to this particular coin. Intel’s own policies created and enforced a situation in which OEMs were ultimately trapped in a cutthroat race-to-the-bottom scenario — if Dell gave up marketing funds, HP could take those funds, cut its prices lower to compensate, and then undercut Dell. In its original antitrust filing, AMD noted that it tried to give HP a million free processors at one point, only to be told that HP was so dependent on Intel rebates, it couldn’t afford to take them.
The table below shows Dell’s MCP payment receipts from Intel according to SEC filings. The FTC conducted its own investigation as well.
Intel - Dell payments
Note that beginning in 2005, when AMD launched dual-core Athlon 64 processors and was seriously hammering Prescott, Intel’s payments sharply increased.

A winning strategy

It’s hard to ignore the fact that in the long run, Intel got exactly what it wanted. AMD certainly deserves blame for its own mistakes — overpaying by 2x for ATI was a colossal blunder — but Intel still systematically blocked its primary competitor from gaining market share. Whether this was bad for consumers depends on whether you think the Pentium 4 (Prescott) and the dual-core Pentium D (Smithfield) were bad components. From my perspective as a CPU reviewer, these were the nadir of Intel’s competitive position, and yes — AMD’s Athlon 64 and Opteron hardware deserved a better shot than they got.
Intel’s $1.25 billion payout to AMD in 2009 probably fairly represents the profit AMD might have taken home over the same intervening period, but this was never about profits — it was about time. And no matter what Intel ended up paying the EU, it bought itself time at AMD’s expense — time in which AMD was forced to spin off GlobalFoundries in an attempt to accelerate its own product roadmaps.
Of course, in the long run, one might argue that Intel’s myopic focus on stifling AMD and dominating the PC channel blinded it to the threat coming up behind it. Today, Santa Clara dominates its conventional markets, but continues to struggle to break into new ones. That’s cold comfort for AMD, but it neatly illustrates how even an industry-dominating titan can be caught unawares by events outside its direct control.

With Zelda and other great games, Nintendo can turn the Wii U around – just like the PS3

Earlier this week, Nintendo did an impressive job revealing some important first-party games for the Wii U. Zelda and Smash Bros. are huge titles that will definitely sell systems, but most impressive was a number of mid-size announcements that really show off what the Wii U can achieve. If Nintendo nails the execution of these games, the Wii U might actually turn around in 2015, with a real chance of standing alongside the Xbox One and PS4.
After the Wii hype train started to die down in the late 2000s, Nintendo began to struggle. The 3DS had to fight against the ever-growing market share of smartphones and tablets, and Wii U sales have been abysmal so far. Nintendo eventually got the 3DS back on the right track, but can it do the same thing with the Wii U? Can a number of hit first-party titles really turn the tables in Nintendo’s favor? After seeing Nintendo’s impressive showing at E3 2014, I think the answer to those questions is “Yes.”
There’s reason to believe that Nintendo is doing right by its biggest franchises. An adapter was recently announced that allows GameCube controllers to be used on the Wii U — designed specifically for diehard Smash Bros. fans. Everything Eiji Aonuma said about the upcoming Zelda game points to a sincere reimagining of Nintendo’s most beloved franchise with a heavy Skyrim influence. Besides, Super Mario 3D World and the recent Mario Kart 8 release were both well-received across the board, so it’s clear Nintendo still has what it takes to maintain the games that matter most to its fan base.
Until recently, Nintendo had largely ignored the Wii U gamepad’s touchscreen. Most of the time, the best you could hope for was off-TV gameplay or perhaps a touch-based inventory. Thankfully, it seems Nintendo is finally taking advantage of this massive touchscreen in the center of the controller. Mario Maker uses the touchscreen to let gamers design their very own Super Mario Bros. levels. Sure, that could theoretically be done with a D-pad or analog stick, but it would be clunky and incredibly slow. Instead, you can tap and drag objects into place, and start playing your custom level with the push of a button.
Add that to the announcement of Kirby and the Rainbow Curse, and the touchscreen is finally starting to make sense. In spite of its business failures, Nintendo is starting to make good on the promises of the Wii U’s unique hardware. It doesn’t have the horsepower of the PS4 or Xbox One, but it does have a lot of untapped novelty.

Thermo-acoustic nuclear fuel rods could scream for help when stressed, preventing nuclear meltdown



It’s been more than three years since the Fukushima Daiichi power plant melted down, and despite nearly unlimited interest from the public we still have virtually no idea what’s going on in there. Detailed findings report the conditions immediately around the plant, beneath it in the soil, and above it in the atmosphere — but the core of the nuclear power station is so heavily shielded that its status remains largely unknown to this day. Experimental rad-shielded robots inch a bit further in every month, but their progress is slow. Scientists are desperate enough that they’re trying to make use of passing cosmic rays, which are occasionally powerful enough to pass through the core and ferry out some precious intel. But why are we just figuring this out now?
New technology developed by academics and the Westinghouse nuclear company could keep this from being a problem in the future. When temperatures or pressures start to fluctuate, their new thermo-acoustic devices emit a corresponding auditory frequency that can be interpreted in real time — the device naturally whistles its status, in other words. Rather than use some complex monitoring rig that would fail in the intense environment of a working nuclear reactor, these thermo-acoustic sensors are based on passive physical forces. Changes in temperature, pressure, or even radiation dosage around the device cause natural shifts in resonant frequency — and thus, the tone of the whistle.
A simple diagram of the thermo-acoustic sensor.
By necessity, the design is as simple as can be. A resonator (long hollow rod) abuts a series of small parallel chambers called the stack. Temperature or pressure differentials across the stack, or changes in the rod’s physical shape due to intense radiation, produce predictable changes in the frequency of resonance inside the resonator — that frequency is our output information. This thermo-acoustic nuclear sensor uses 1100 parallel chambers made of a durable ceramic, and can be made small enough to fit virtually anywhere within the reactor core. Most interestingly, the team suggests that their monitors could be built into nuclear fuel rods themselves, turning fuel containers into sensors that intrinsically report changes without needing any outside power or supervision.
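The temperature sensitivity falls out of elementary acoustics (textbook relations, not Westinghouse’s specific design equations). A tube of length L open at both ends resonates at a fundamental frequency set by the local speed of sound:

\[ f = \frac{v}{2L}, \qquad v = \sqrt{\frac{\gamma R T}{M}} \]

where γ is the fill gas’s heat-capacity ratio, R the gas constant, T absolute temperature and M molar mass (a tube closed at one end resonates at v/4L instead). Since f scales with √T, a hotter core audibly raises the pitch with no electronics anywhere in the loop.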
These simple devices would only be able to monitor one attribute of the core in one location, so an array of specialized thermo-acoustic sensors would be placed throughout the core to monitor different variables. Resonators of specifically tailored lengths and designs would produce a multi-voice chorus in a reactor, providing nuanced, real-time information with no need for energy input. If there’s any justice in the world, these scientists will at least try to tailor any “meltdown” frequencies to sound ominous and panicked, or perhaps like the monolith from 2001.
Stored nuclear fuel rods glow an eerie, distinctive blue.
Regardless, if Fukushima had sported these devices from the start, we would almost certainly know much more about the state of its core today. The preference in nuclear engineering is now for these sorts of “passive” safety measures which rely on relatively fool-proof principles like thermodynamics. Ideas like “freeze plugs,” which require active cooling of a stopper which otherwise melts and totally drains the reactor due to gravity alone, represent the kind of fool-proof design we demand of nuclear technology these days.
While we of course want to keep watch for any cascading “meltdown” reactions, Fukushima’s nightmare scenario showed just how bad things need to get to foul up a modern reactor; the team sees their sensors as ultimately more useful to fundamental nuclear research. If a good portion of nuclear reactors were providing detailed records to some giant national database, analysts could probably derive useful suggestions for safety or efficiency upgrades.
This sort of functionality has been tantalizingly close for a while, being technically possible but infeasible in the real world due to costs and the high rate of equipment destruction. With simple designs like this, nuclear companies like Westinghouse could finally be able to look inside the fuel rods in an average nuclear plant directly and in real time. That’s the sort of upgrade you install quietly, hoping nobody notices that it wasn’t there all along.

Saturday 14 June 2014

The History of the Official World Cup Match Balls

 
Telstar: Mexico, 1970
Adidas started making soccer balls in 1963 but made the first official FIFA World Cup ball in 1970. This was the first World Cup ball to use the Buckminster-type design, and the first with 32 black-and-white panels. The TELSTAR was more visible on black-and-white televisions (the 1970 FIFA World Cup Mexico™ was the first to be broadcast live on television).
 
Telstar Durlast: West Germany, 1974
Two match balls were used in 1974 – the adidas Telstar was updated with new black branding replacing the gold branding, and a new all-white version of the Telstar, named adidas Chile, was introduced. 1974 was also the first time World Cup match balls could carry names and logos.
 
Tango Durlast: Argentina, 1978
The ball design represented elegance, dynamism and passion. The 1978 match ball included 20 panels with triads that created an optical impression of 12 identical circles. The Tango inspired the match ball design for the following five World Cup tournaments.
 
Tango Espana: Spain, 1982
Adidas introduced a new ball with rubber inlaid over the seams to prevent water from seeping through: the first ball with water-resistant qualities. General wear from kicking, however, meant the rubber began to degrade after a short time and the ball needed to be replaced during the game. It was the last genuine-leather World Cup ball.
 
Azteca : México, 1986
The FIFA World Cup in Mexico saw the introduction of the first polyurethane-coated ball, which was rain-resistant: the first synthetic match ball, with good qualities on hard and wet surfaces.
The ball was the first to include designs inspired by the host nation. The Azteca was elegantly decorated with designs inspired by Mexico’s Aztec architecture and murals.
 
Etrusco:  Italy, 1990  
The first ball with an internal layer of black polyurethane foam. The name and design paid homage to Italy’s history and the fine art of the Etruscans.
 
Questra: USA, 1994
The official ball of FIFA World Cup USA, 1994 was enveloped in a layer of polystyrene foam. This not only made it more waterproof but allowed the ball greater acceleration when kicked. The new game ball felt softer to the touch and offered improved ball control and higher velocity during play.
The ball’s design represents space technology, high velocity rockets and America’s “quest for the stars.”
 
Tricolore: France, 1998
By 1998, FIFA World Cup France was played with a ball sporting the French red-white-blue tricolor: a complete departure from the traditional black-and-white pattern, and the first colored official World Cup ball. The TRICOLORE used underglass print technology with a thin layer of syntactic foam.
ICON: Women's World Cup, 1999
The first ball specially designed for the Women's World Cup.
Fevernova™: Korea/Japan, 2002
For FIFA World Cup Korea/Japan, 2002, Adidas created a new ball made up of thicker inner layers to increase the accuracy of the ball in flight. Fevernova included a refined syntactic foam layer that allowed for a more precise and predictable flight path. Asian culture inspired the revolutionary colorful look.
Fevernova design for USA Women's World Cup 2003
Teamgeist: Germany, 2006; Teamgeist Berlin: Final Ball
A radically new configuration reduced the amount of panel touch points forming a smooth and perfectly round exterior that improved accuracy and control. Prior to the Teamgeist, the surface of World Cup match balls had notable differences depending on where a player would strike the ball due to seams, ridges and other imperfections where panels come together. The revolutionary propeller design of the Teamgeist minimized corners and created a smoother surface for improved play. The ball was designed with traditional colors of the German flag and was accentuated with the golden color of the World Cup trophy. 

Adidas unveils Match Ball for 2007 FIFA Women’s World Cup™

Jabulani - the Official Match Ball for the 2010 FIFA World Cup South Africa.
The Jabulani featured a new grip n groove technology that provided players a ball with stable flight and grip under all conditions. With eight thermally bonded 3-D panels that were spherically molded for the first time, the Jabulani was more round and accurate than its predecessors.
Jo'bulani - the Gold Final Official Match Ball for the 2010 FIFA World Cup South Africa.
2011 Women's World Cup Official Match Ball - SpeedCell
 
Brazuca: Brazil, 2014
Brazuca was confirmed as the match ball name after a public vote in Brazil in which more than one million soccer fans in the host country took part.
Historic Soccer Balls
