Friday, July 29, 2016

5G wireless is coming, and it’s going to blow you away

David Talbot

A massive FCC spectrum release—and new advances in wireless technologies—accelerate an era of incredibly fast data.
Mobile data consumption is soaring, but a broad set of technology advances is poised to transform what today’s smartphones and other wireless mobile devices can do—ushering in high-resolution video and fully immersive, 3-D environments.
At the NYU Wireless lab in Brooklyn, students are testing prototype equipment—forerunners of next-generation phones—that can transmit a blazing 10 gigabits of data per second, even while moving around crowded courtyards. And Samsung recently showed how a car traveling at 25 kilometers per hour could maintain a gigabit-per-second connection as it moved in and out of range of mobile transmitters called base stations. 
Both achievements are roughly 100 times faster than what current commercial mobile phone technology can do. 
These are demonstrations of the kinds of astonishing capabilities that will be unleashed thanks to this month’s release of vast amounts of high-frequency spectrum by the U.S. Federal Communications Commission—a move that will make available several times more spectrum than has ever existed for wireless telecommunications—and a $400 million research effort announced by the White House.
The next-generation technology will eventually be defined in a standard that will be known as “5G.” It is expected to provide Internet connections at least 40 times faster—and with at least four times more coverage worldwide—than the current standard, known as 4G LTE. 
This Intel chip, called a "massive antenna array," includes 64 antennas and can be expanded to 256, allowing ultra-high-capacity millimeter-wave frequencies to be sent in specific directions.
The new technology is expected to use so-called “millimeter wave” radio spectrum, with frequencies above 24 gigahertz. The FCC’s move in mid-July made the United States the first country to make far more of this spectrum available for commercial use, as opposed to primarily for radar and military systems.
Higher frequencies carry significantly more data. But they are also far more easily blocked by buildings, foliage, and even rain, making their use for mobile communications quite challenging (some existing systems use these frequencies for fixed point-to-point wireless connections with clear lines of sight).
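The frequency penalty is easy to quantify with the standard free-space path-loss formula. Here is a rough sketch (idealized isotropic antennas in open space, so real-world millimeter-wave losses from buildings and rain are considerably worse):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for an isotropic antenna pair."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare loss over 100 meters for a typical 4G band vs. a
# millimeter-wave band like the newly released 28 GHz spectrum.
loss_2_4ghz = fspl_db(100, 2.4e9)  # roughly 80 dB
loss_28ghz = fspl_db(100, 28e9)    # roughly 101 dB
print(f"2.4 GHz: {loss_2_4ghz:.1f} dB, 28 GHz: {loss_28ghz:.1f} dB")
```

The gap grows as 20·log10 of the frequency ratio, so moving from 2.4 GHz to 28 GHz costs about 21 dB, which is why directional antennas are needed to close the link.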
But thanks to advances in signal processing, chips, and antenna technologies, Samsung, AT&T, Verizon, Ericsson, and other companies will be able to use this spectrum for next-generation mobile connectivity.
Already, some startups are using these tricks to pursue new business models. One is Starry, a company beta testing a home Internet access service in Boston. But such efforts are intended for stationary devices.
Demos at NYU and elsewhere are showing how millimeter wave signals can be used for mobile communications despite their biggest problem: they’re blocked by objects that come between transmitter and receiver.
Arrays of tiny antennas on chips or on miniature circuit boards can “steer” a signal in specific directions and mitigate this downside, a technique known as “phased array” beamforming. Samsung, for example, has already prototyped a 32-antenna phased array in handheld wireless devices, and Samsung, Ericsson, and Nokia all have equipment they are preparing for trials.
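The steering itself is straightforward trigonometry: each antenna element shifts its signal's phase so that the wavefronts add up in the chosen direction. A minimal sketch, assuming a uniform linear array with half-wavelength element spacing (the details of Samsung's actual arrays are not public):

```python
import math

def steering_phases(n_elements: int, spacing_wavelengths: float,
                    angle_deg: float) -> list[float]:
    """Per-element phase shifts (radians) that steer a uniform linear
    array toward angle_deg off broadside."""
    k = 2 * math.pi  # radians of phase per wavelength of path difference
    return [k * n * spacing_wavelengths * math.sin(math.radians(angle_deg))
            for n in range(n_elements)]

# A 32-element array with half-wavelength spacing, steered 30 degrees:
# the phase ramps linearly across the elements.
phases = steering_phases(32, 0.5, 30)

# At broadside (0 degrees) no phase progression is needed at all.
assert all(p == 0 for p in steering_phases(32, 0.5, 0))
```

Changing the phase ramp electronically re-points the beam in microseconds, which is how a handset can keep tracking a base station while it moves.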
“There’s a tremendous amount of work being done at all the major telecom companies, big and small. You see a lot of good activity happening throughout the industry, realizing that the millimeter wave future is coming very, very quickly,” says Ted Rappaport, who heads wireless research at NYU.  
The first commercially available handsets with such technology could appear in two to five years. “I call this the renaissance of wireless. There is a confluence of events that will change the world much faster than anybody believed a few years ago,” Rappaport says. 
Underpinning the new wireless technologies are remarkable advances in microchips. First, the smaller feature size on chips will allow much more data processing without killing off your battery. And second, such chips are being overlaid with a second layer of materials that act as antennas, minimizing signal loss and energy consumption.
Manufacturing advances are making these advanced capabilities possible on standard silicon, paving the way for cheap consumer devices, says Ken Stewart, chief wireless technologist at Intel. “What the consumer will see are ever richer experiences and high-resolution video on mobile devices,” he says. “Instead of playing Pokémon Go while watching phone screens, they’ll be doing it in fully immersive, 3-D environments with fast refresh rates.” 
The groundswell of activity comes amid exponential growth in wireless data demands as billions of people expect more capacity in their mobile devices. Additional demand will come from machines like networked cars and smart power grids.

How the media will rise in the face of the digital revolution

 Raghav Bahl

The media is passing through an awkward digital adolescence. With falling revenues and smaller newsrooms, the industry is being squeezed into an unfamiliar online space against its will. Publications with hundred-year pedigrees are having to rethink and relearn their trade from the ground up.
The industry has been in a downturn since 2007 and, while some publications have been plodding on, hemorrhaging cash and complaining, others — like The New York Times — have been actively experimenting with new business models in order to turn things around. The publication’s latest venture, producing a Spanish language version for Latin America, shows it has eyes on an expanding, global future.
I am optimistic that the industry will prevail, despite the naysayers. So how will the media continue to adapt to the digital revolution and monetize its content for the age of online sharing? And how might platforms like Facebook, Google and Apple reinvent our concept of the exclusive?

The challenges and opportunities of the digital age

Before the great digital expansion, broadcasters and publishers had a fairly captive audience.
Viewers were limited to the few television channels they had available or the newspapers they bought. Higher ratings and wider circulations meant bigger ad revenues, and distribution advantages gave the broadcaster and publisher greater power to monetize. Today, those advantages have all but vanished; consumers have free and easy access to many channels and are always just one click away from new content.
Nowadays, audiences are less likely to head directly to destination news outlets. Instead, they are discovering this content on social media — 63 percent of Facebook and Twitter users say they access news on the social networks, according to the Pew Research Center. This is exposing social media users to a whole range of varied content. Not only are media outlets forced to write stories that stand out from the crowd, but they also must cater to an entirely new kind of consumer.
In response, publications and marketers are creating bite-sized, easily digestible and shareable content that comes in the form of listicles, FAQs, photo essays, video content and so on. We’re also seeing a rise in clickbait — sensationalist content that attempts to lure readers with over-the-top claims, compelling imagery and shock tactics, ultimately to sell advertising.
While readers might enjoy this content, they place little value on material that lacks substance. Nevertheless, this type of content won’t disappear; it is perhaps not unlike the poor-quality tabloids that coexisted with quality broadsheets in the old days. It falls into the category of the ephemeral and of mindless fun, something that has always been popular.
However, I am optimistic that these changes will bring about an evolution of the industry, rather than its demise. The fact is, there are advantages to the new, digital world.
For one, broadcasters and publishers now have a far cheaper distribution system. When I first began distributing television content and newspapers in 1985, distribution costs could reach a staggering 25 percent. Trying to reach an audience of over one billion — as several YouTube videos have now done — would have been unthinkable.
Of course, consumers can click away from your content, but they also come to your product in the same way. As The New York Times has realized with its new foreign language ventures, the audience is no longer trapped in a demographic or geographical bubble — it can be global. Viewers, readers and consumers can now access online content from anywhere, at any time. These advantages will only compound as time goes by.

How digital content will be monetized

Publishers are now making content for a new generation of younger consumers. Pew Research reports that newspaper digital readership increased more than twice as fast as the overall internet audience in the age groups of 18-24, 25-34 and 35-44.
Despite the prevalence of short-form content, millennials are in fact voracious readers. What’s more, deep, meaningful content of more than 3,000 words is more likely to be shared, and longer-form articles give marketers more conversions. This is good news for journalists and publishers, as advertisers will once again value long-form content, and will likely pay more for content that drives more leads.
Branded and native ad content can be seamlessly interwoven with quality journalistic content — through words, sound and moving pictures — and delivered on a nonlinear mobile platform. There is no doubt that this form of advertising content will increase in both relevance and in volume, and will replace more traditional forms of advertising. In this regard, we can draw a parallel to the old days, when advertising column inches in newspapers became 30-second commercials — now a staple of the television experience.
Furthermore, the marriage of print and mobile is happening. Thanks to the prevalence of social channels, media is becoming ever more personal, and will from now on be consumed on a handheld device. This new animal will be fed with a new advertising format; multi-media and targeted ads are already driving revenues for some publishers. This will only increase.
Despite broad changes to advertising revenue sources, this does not mean an end for recognizable subscription models. What we will see instead is a qualitative split in the media.
On one hand we will have short-form content pulling in low-margin advertising revenue; on the other, we will have in-depth, insightful reporting that adds value to the readership and commands both subscription fees and far higher advertising revenue.
Subscription models are sustainable for powerful media brands like The New York Times or The Economist. An established, loyal readership is always willing to pay, as the rising number of digital subscribers attests. And where less-well-known outlets — like Pando, for example — provide in-depth, original reporting, consumers will part with their cash. Subscription models will also flourish when media outlets provide utility and advice — for entrepreneurs and investors, for example.

What does the future look like?

There is an even bigger change on the horizon. We must look to digital platforms like Facebook, Google and Amazon for the future of exclusive news content.
Facebook, the biggest of the bunch, has recently allowed users to monetize news feed video content, giving creators a large percentage of the profits. This paradigm will shift even further as Facebook’s monopoly is challenged by other large platforms joining the race.
When Google News, Apple and even Amazon follow suit, the power of the exclusive content provider will go up. Platforms will begin vying for exclusive, monetized content, increasing the value of the product and making publishers more powerful in turn.
Despite fears that it’s all over for the media, I argue the opposite — we are simply in a difficult transition from which we will emerge stronger and better. Along with the intensely crowded clickbait and popular quickie-content market, we will continue to see serious, quality journalism. The amazing cost advantages brought by free (or nearly free) distribution channels, and a growing global audience, means publishers can continue to operate and good content will still rise to the top.
And when monetized social media content really takes off, we are likely to see a return to exclusive media and a huge boost in ad revenues, shepherded in by the biggest social and commercial online platforms out there. It’s time for publications to embrace the digital revolution, because it is only going to make them stronger.

Thursday, July 28, 2016

Stuffed quinoa peppers - Mayo Clinic

By Mayo Clinic Staff

Dietitian's tip:

Stuffed peppers are often made with ground meat. This meatless version is satisfying, thanks to a hearty mix of quinoa, walnuts, tomatoes and zucchini.

Number of servings

Serves 4


Ingredients

  • 2 large bell peppers, cut in half
  • 1 cup uncooked quinoa
  • 1 tablespoon olive oil
  • 1 cup chopped zucchini
  • 6 Roma tomatoes, chopped
  • 1/2 cup chopped onion
  • 1 stalk celery, chopped
  • 2 tablespoons chopped walnuts
  • 3 garlic cloves, chopped
  • 2 teaspoons chopped fresh thyme
  • 1 teaspoon salt
  • 1/2 teaspoon ground black pepper
  • 1/2 teaspoon Parmesan cheese


Directions

Heat the oven to 350 F. Cook quinoa according to package directions. Set aside. Heat a large saute pan over medium-high heat. Add oil, zucchini, tomatoes, onion, celery, walnuts, garlic and thyme. Reduce heat to medium after 2 minutes. Once vegetables are tender, add cooked quinoa, salt, pepper and cheese. Place the pepper halves on a baking sheet. Spoon 1/2 cup of the quinoa and vegetable mixture into each pepper half. Cover with foil and bake for 15 to 20 minutes. Uncover and bake for an additional 5 minutes, until peppers are cooked to desired doneness.

Nutritional analysis per serving

Serving size: 1 pepper half

  • Calories 313
  • Total fat 13 g
  • Saturated fat 3 g
  • Trans fat 0 g
  • Monounsaturated fat 4 g
  • Cholesterol 9 mg
  • Sodium 674 mg
  • Total carbohydrate 38 g
  • Dietary fiber 7 g
  • Total sugars 10 g
  • Protein 13 g
Created by the executive wellness chef and registered dietitians at the Mayo Clinic Healthy Living Program.
April 19, 2016

How to Color Black-and-White Photos With One Click



Written by Nancy Messieh July 28, 2016

There are quite a few apps and services out there that make it easy to convert color photos to black and white, or to retain just a splash of color. But what if you want to go in the opposite direction and add color to a black-and-white photo?
There aren’t many easy ways to do this, but thanks to a new service from Algorithmia, it just got a lot easier.
After you load the page, you can either paste the URL of a black-and-white photo available online (make sure the URL ends in .jpg for it to work) or upload an image from your computer, then hit the Colorize It button. And that’s it!
The website uses deep learning to classify the various elements within the image and then adds color based on that categorization. On the website, you can drag a line splitting the color and black-and-white versions to see how it changes.
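The article doesn’t detail the model, but deep-learning colorizers of this kind typically work in the Lab color space: the grayscale input already is the lightness (L) channel, so the network only has to predict the two chrominance channels (a and b). A conceptual sketch, with a hypothetical stand-in for the trained network:

```python
import numpy as np

def colorize(gray_l: np.ndarray, predict_ab) -> np.ndarray:
    """Conceptual Lab-space colorization: the grayscale image is used
    as the lightness (L) channel; a model predicts the chrominance
    channels (a, b); all three are stacked into a color image."""
    ab = predict_ab(gray_l)  # shape (H, W, 2)
    return np.dstack([gray_l, ab[..., 0], ab[..., 1]])

# Hypothetical stand-in for a trained network: paint everything
# slightly blue-ish (negative b). A real model infers a/b per pixel
# from what it recognizes in the scene (sky, grass, skin, ...).
def dummy_model(l_channel: np.ndarray) -> np.ndarray:
    h, w = l_channel.shape
    ab = np.zeros((h, w, 2))
    ab[..., 1] = -20.0  # b < 0 shifts toward blue in Lab space
    return ab

lab_image = colorize(np.full((4, 4), 50.0), dummy_model)
assert lab_image.shape == (4, 4, 3)
```

This also explains the sepia look on hard images: when the network is unsure, it predicts chrominance values near zero, which leaves the lightness channel nearly untinted.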
So how good is the colorization? It definitely varies from image to image. With some black and white photos, you’ll get a very subtle, sepia colorization that makes it look like a vintage photo with subtle hints of color. With other photos that have a large amount of sky or grass in them, you get an impressively accurate rendering.
Using this landscape photo from Pixabay as an example:

You can download the colorized version of the image (which will have a small ColorizeIt logo in the bottom right-hand corner), or you can download the comparison at about 800 pixels wide.
Wherever you place the purple line is where it will appear as a static element in your download. There doesn’t appear, however, to be a way to embed or share the image with the interactive element maintained.
What do you think of the service? Let us know in the comments. 

Wednesday, July 27, 2016

Treating pain without feeding addiction

 Study shows promise of non-drug pain management: Approach could help confront nation's opioid addiction epidemic

It's a Catch-22 with potentially deadly consequences: People trying to overcome addiction can't get treatment for their pain, because the most powerful pain medicines also carry an addiction risk.
And so their pain continues to get in the way of their addiction recovery -- or they seek pain relief in the same addictive substances they're trying to avoid.
But a new study shows the potential for patients to break out of that cycle through a non-drug approach that combines behavioral therapy and social support to help them manage their pain. The low-cost approach, grounded in psychological theories of pain, could help address the nation's epidemic of addictions to opioid painkillers and illicit drugs.
Veterans who received this pain-focused care while also being treated for addiction found that the intensity of their pain decreased, their ability to function increased, and their alcohol use went down, compared to veterans who received a less-focused approach. However, the two groups had similar rates of drug use.
Just 10 weekly sessions of the approach, called ImPAT for Improving Pain during Addiction Treatment, had an effect that lasted up to a year in 55 veterans who took part, according to the new results published in the journal Addiction by a team from the VA Ann Arbor Healthcare System's Center for Clinical Management Research and University of Michigan Medical School's Addiction Center.
The researchers have already launched a follow-up study in a larger group of 480 non-veterans in a residential addiction treatment program. And the study's authors note that the ImPAT approach has the potential to be easily and inexpensively adopted by addiction treatment centers and groups worldwide, through team members trained in standard psychological techniques.

Caught in the middle
"These results highlight the need for addiction treatment programs to offer a multifaceted approach that doesn't only address substance use but also the other factors that might be driving substance use, including pain," says Mark Ilgen, Ph.D., the study's lead author and a VA and U-M psychologist specializing in addiction research. "We've shown that it's possible to improve pain outcomes in people with addiction, and even have some spillover effects on their substance use."
Addiction treatment programs often have patients who suffer from chronic pain, but offer few options to treat them, Ilgen says.
To make matters worse, "Past studies of psychosocial approaches for pain have often excluded people with drug or alcohol problems, addiction treatment programs do not usually have providers trained in pain care, and many pain specialists will not treat people who also have addiction. So patients are caught in the middle."
All 129 patients in the study, most of them men in their 40s and 50s, were receiving outpatient addiction treatment in a CBT-based, non-abstinence setting at the Ann Arbor VA. Half were randomly assigned to ImPAT sessions, the other half to support groups of peers, led by a therapist, where pain and addiction could be discussed.

Combination approach
ImPAT combines elements of cognitive behavioral therapy with another psychosocial approach called acceptance and commitment therapy.
While the two approaches aren't usually used together, they are often used in pain treatment settings -- but those clinics and programs don't often accept people who also acknowledge they have addiction issues. Ilgen and his colleagues hope their results will help bring the techniques into addiction treatment settings, where the cognitive behavioral therapy approach is often used.
The ImPAT technique seeks to use integrated approaches both to help patients focus less on their pain and more on other aspects of life. This includes techniques to help people adapt to their pain, find ways to distract themselves from their pain, and think of ways to function in the face of pain.
"We want to take the focus off pain and put it onto functioning, and finding pleasurable ways to spend time," Ilgen says. "There's also a strong link between depression and pain. Pain is responsive to mood, and mood is responsive to social support."
In an editorial accompanying the new paper, another addiction and pain specialist, William C. Becker, Ph.D. of Yale University and the Connecticut VA, notes that the new results are even more impressive because ImPAT was compared with another psychosocial approach.
The newly published study did show that one in five veterans randomly assigned to ImPAT or the comparison approach did not actually attend any sessions -- likely because of delays in getting the sessions going. The larger ongoing study, based in an abstinence-based treatment facility in Michigan and funded by the National Institute on Drug Abuse, has largely avoided that issue by decreasing the delay between study recruitment and the start of the treatment groups, Ilgen says.
The study in veterans was planned just before the rapid rise in, and increased awareness of, opioid painkiller addiction issues in the U.S. While opioid addiction was one of the issues faced by veterans in the study, most had issues with multiple substances.
The sharp rise in opioid addiction in recent years -- often among people who started taking the painkillers as treatment for acute or chronic pain -- has made the search for effective non-drug pain treatment options even more urgent, Ilgen notes.
"Long-term use of opioids can sometimes lead to a hypersensitivity to pain, so there may actually be a causal link between use of these medications and pain," he notes. "We need to study psychological pain management approaches in opioid-dependent patients, including those receiving addiction therapies such as buprenorphine."
In the meantime, he notes, people struggling with addiction who want to seek relief from pain should explore the full range of treatment options that have been shown to work in non-addicted patients, including physical therapy, exercise and psychotherapy as well as antidepressant medications. And while existing prescription guidelines do not explicitly prohibit the use of opioid painkillers in people with pain who have substance use disorders, these guidelines recommend only using opioids sparingly and under close supervision, he says.

Good Eggs raises $15 million to expand across the U.S.

Lora Kolodny

A San Francisco startup that many people assumed was toast, Good Eggs, has raised $15 million in a new round of venture funding led by Index Ventures to expand its online, organic grocery business first in the San Francisco Bay Area, and later across the U.S.
In recent years, Good Eggs tried to ramp up its eco- and farm-friendly grocery business only to face logistics problems, high costs and lagging customer satisfaction in some markets.
It shut down all its operations outside of the San Francisco Bay Area, laid off employees and retrenched, expanding the assortment of products it offers by more than 1,000 items, and beginning to use third-party logistics services to supplement its fleet of truck drivers to ensure on-time deliveries.
While Good Eggs founder and former CEO Rob Spiro, an ex-Googler, has stayed on as a company adviser, the company is now led by longtime food and consumer packaged goods executive Bentley Hall, who joined the startup late last year.
Hall is an accountant by training who previously served in a number of senior executive roles at Plum Organics, and worked for other mainstream consumer goods manufacturers including Clif Bar and Johnson & Johnson.
The CEO declined to comment on terms of the new deal.
He said while not all of Good Eggs’ previous backers were in the new round, many were, including Index Ventures, which led the round. New investors also joined including firms with a depth of expertise in foodtech, including S2G Ventures and Obvious Ventures, he noted.
Good Eggs faces a huge amount of competition in its quest to link customers who’d fall into the “picky eater” and “foodie” category to all the vegan, gluten-free, paleo-friendly and other ingredients that they crave from local farms and sustainable producers.
Online groceries today in the U.S. range from Instacart or Postmates, which both deliver from brick and mortar businesses, to services from tech titans like Amazon Fresh or Google Express, and other venture-funded sellers of produce like Relay Foods.
Then there are countless small businesses and regional players like Fresh Direct in New York and Crisp in Chicago, and others that focus on healthy and organic packaged goods and snacks, like Thrive Market, Love With Food and NatureBox, to name just a few on the national scene.
Beyond the online players, Hall sees traditional brick and mortar grocers like Whole Foods Market or Safeway as competition as well. “A majority of people still shop at brick and mortar. I’m eager to get people shifting to online grocery and accelerating that shift,” he said.
Good Eggs investors also declined to comment on terms of the new venture funding round, including whether or not the company has taken a “down round,” or lower valuation than it had previously.
Index Ventures’ Danny Rimer said his firm upped its investment in Good Eggs because the company has improved its operations, churn and “basket” size, and can consistently deliver with the level of service quality it needs to support expansion now.
“The original value proposition holds true. Customers want to understand who are the suppliers and creators of the food that they eat and feed their families. They want a relationship with local suppliers and farmers. It’s no longer enough to sell a decent product at a good price,” Rimer said.
In groceries online and off, the “basket” is of paramount concern, generally. The more people buy per order, the better the margins on that order.
He compared Good Eggs to Etsy’s online marketplace for hand-made gifts, accessories and other items. Etsy now sells food but not fresh produce.
Investors expect Good Eggs to focus on scaling in the Bay Area near-term and beyond it longer-term, hiring, and maintaining a high, average order value among customers, Rimer said.
Hall said he believes the startup has an advantage over other players in the space thanks to the software that Spiro and the early Good Eggs team built to run the business including, especially, an app that helps the company conduct speedy sorting, tracking, packing and shipping of a high volume of goods from a wide variety of farms and local vendors.
Corrections & clarifications: Rob Spiro is an advisor to Good Eggs but not a chairman with the company as previously stated. Bentley Hall held several senior executive roles at Plum Organics prior to joining Good Eggs. 

All e-cigarettes emit harmful chemicals, but some emit more than others

American Chemical Society

While previous studies have found that electronic cigarettes emit toxic compounds, a new study from Lawrence Berkeley National Laboratory (Berkeley Lab) has pinpointed the source of these emissions and shown how factors such as the temperature, type, and age of the device affect emission levels. That information could be valuable to both manufacturers and regulators seeking to minimize the health impacts of these increasingly popular devices.
The study, which was published in Environmental Science & Technology, found that the thermal decomposition of propylene glycol and glycerin, two solvents found in most "e-liquids" (the substance that is vaporized by the e-cigarette), leads to emissions of toxic chemicals such as acrolein and formaldehyde.
"Advocates of e-cigarettes say emissions are much lower than from conventional cigarettes, so you're better off using e-cigarettes," said Berkeley Lab researcher and the study's corresponding author Hugo Destaillats. "I would say, that may be true for certain users--for example, long time smokers that cannot quit--but the problem is, it doesn't mean that they're healthy. Regular cigarettes are super unhealthy. E-cigarettes are just unhealthy."
In the paper, "Emissions from electronic cigarettes: Key parameters affecting the release of harmful chemicals," Destaillats and a team of researchers simulated vaping using three types of e-liquids in two different vaporizers operated at various battery power settings. The two e-cigarettes were quite different, one fairly cheap with one heating coil, the other more expensive with two heating coils in parallel. The researchers used gas and liquid chromatography to determine what was in the vapor, looking at the first puffs as well as later puffs after the device heated up and reached a "steady state."

Not all puffs are equal
One finding was that the first and last puffs produce widely varying emissions. Using a custom-built vaping apparatus emulating realistic vaping habits, researchers drew on the e-cigarette by taking puffs lasting 5 seconds every 30 seconds. They found that vapor temperature rose quickly in the first 5 to 10 minutes until reaching a steady state temperature at around the twentieth puff.
Correspondingly, emissions levels between the first few puffs and the steady state increased by a factor of 10 or more in some cases, depending on the device, the battery voltage, and the emitted compound. For example, for acrolein, a severe eye and respiratory irritant, a single-coil e-cigarette operated at 3.8 volts emitted 0.46 micrograms per puff in the first five puffs, but at the steady state it emitted 8.7 micrograms per puff. "When you apply the same voltage to the double-coil e-cigarette you see a lot less emissions," said co-author and Berkeley Lab researcher Lara Gundel. "We think it has to do with lower temperatures at each of the coil surfaces."
For comparison, conventional cigarettes emit 400 to 650 micrograms of acrolein per cigarette, accounting for both mainstream and sidestream emissions. Assuming 20 puffs on an e-cigarette is equivalent to smoking a conventional cigarette, Gundel said, then total emissions of acrolein for an e-cigarette are about 90 to 100 micrograms.
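Gundel’s 90-to-100-microgram figure checks out if you assume (my assumption here, not the paper’s stated model) that per-puff emissions ramp roughly linearly from the first-puff rate to the steady-state rate over the ~20 puffs the device takes to heat up:

```python
# Back-of-envelope check of the ~90-100 microgram estimate, assuming
# a linear ramp in per-puff acrolein emissions as the device heats up.
first_puff_ug = 0.46  # micrograms/puff in the first puffs (from the study)
steady_ug = 8.7       # micrograms/puff at steady state (from the study)
n_puffs = 20          # puffs assumed equivalent to one cigarette

step = (steady_ug - first_puff_ug) / (n_puffs - 1)
total_ug = sum(first_puff_ug + i * step for i in range(n_puffs))
print(f"~{total_ug:.0f} micrograms per 20-puff session")
# prints: ~92 micrograms per 20-puff session
```

That is still several times lower than the 400 to 650 micrograms from a conventional cigarette, which is the comparison advocates lean on.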
Separately, to test effects due to device aging, researchers used a single device over nine consecutive 50-puff cycles without cleaning. Again, emissions of formaldehyde, acetaldehyde, and acrolein--all either carcinogens or respiratory irritants--increased with usage. "In some cases we saw aldehyde levels increase 60 percent between cycles 1 and 9," said co-author and Berkeley Lab researcher Mohamad Sleiman.
The researchers note in their paper: "This effect is consistent with the buildup of polymerization byproducts on or near the coil leading to accumulation of the sort of residues that are often referred to in the blogosphere as 'coil gunk' or 'caramelization.' Heating these residues would provide a secondary source of volatile aldehydes."
Lastly, because many e-cigarettes allow users to control the voltage, the researchers systematically investigated the effect of voltage on emissions. They found that as the voltage increased, both the amount of e-liquid consumed per puff and the vapor temperature were higher. In the case of acrolein and formaldehyde, the amount formed at the highest voltage of 4.8V was an order of magnitude higher than the amount at the lowest voltage of 3.3V.
Destaillats takes pains to note that the results do not mean that e-cigarettes are safe to use at lower temperatures. "We found there are emissions of toxic chemicals at any temperature at which you use the device," he said. "And the higher the temperature, the more emissions."

Two new carcinogens detected
Because there is an immense variety of e-cigarettes as well as e-liquids, the Berkeley Lab researchers decided to focus on an element that is common to all of them: the solvent in the e-liquid. Almost all e-liquids use a combination of propylene glycol and glycerin in varying proportions as a solvent.
"Both are used for making artificial smoke on stage," Destaillats said. "The ratio between the two determines things like the volume of vapor cloud that you produce. They are considered safe for food."
However, there have been few if any studies on the safety of heating and inhaling propylene glycol and glycerin. "People are not drinking the liquids--they're vaping them," said Sleiman. "So what counts is the vapor."
The researchers vaporized liquids consisting solely of the solvents to verify that they were the source of the emissions. In all, the researchers detected significant levels of 31 harmful chemical compounds, including two that had never been previously found in e-cigarette vapor--propylene oxide and glycidol, both of which are probable carcinogens.
"Understanding how these compounds are formed is very important," Destaillats said. "One reason is for regulatory purposes, and the second is, if you want to manufacture a less harmful e-cigarette, you have to understand what the main sources of these carcinogens are."

Samsung HDR update improves the color in its high-end TVs

As a reminder, HDR, or "high dynamic range," video expands the maximum light and dark levels and the number of shades in between. As a result, you can see more detail in shadows and highlights, along with a wider color gamut. Samsung's SUHD TVs also have 10-bit panels, meaning they can display up to a billion colors -- many times more than a regular HDTV -- if the source content supports it.
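The "billion colors" figure follows directly from the panel's bit depth; a quick sketch of the arithmetic, with standard 8-bit figures included for comparison:

```python
def displayable_colors(bits_per_channel: int) -> int:
    # Each of the three RGB channels has 2**bits levels;
    # the total color count is the product of the three.
    levels = 2 ** bits_per_channel
    return levels ** 3

print(displayable_colors(8))   # standard 8-bit panel: 16,777,216 colors
print(displayable_colors(10))  # 10-bit panel: 1,073,741,824 (~1.07 billion)
print(displayable_colors(10) // displayable_colors(8))  # 64x more
```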

Samsung says the new feature "uncovers previously hidden images from darker content scenes" with native HDR video, while increasing the contrast ratio in brighter scenes to add more detail. It adds that picture experts can now customize brightness, contrast and gradation "to the finest degree." The upgrade is available for all of Samsung's 2016 SUHD TVs and will roll out to other HDR models by September.

Getting tomatoes to ripen without going soft


Soft, juicy, delicious tomatoes were a feature of my childhood and are still available from the plants I grow each summer. However, they've largely vanished from stores. The ripe fruits don't hold up well to shipping, so producers have focused on growing variants where mutations have partially blocked the ripening process. These tomatoes stay firm longer, but it comes at the cost of texture and flavor—as well as a decline in their nutritional value.

Now, researchers seem to have identified an enzyme that specifically helps soften the tomato during the ripening process. By knocking its activity down, they've interfered with softening while leaving other aspects of the ripening process intact. The result is a ripe fruit that can sit at room temperature for two weeks and still remain firm.

In some ways, the surprise of these results isn't that they happened; it's that they took so long. A high-quality tomato genome sequence was first published in 2012, and it allowed researchers to identify more than 50 genes that were likely to encode proteins that could modify the plant cell wall. Four of these genes appeared to be active at high levels in the ripening fruit, and so these genes were targeted through genetic engineering.

Mercedes unveils world's first completely electric semi truck


Like most electric vehicles, the eTruck is relatively whisper-quiet, especially compared to a typical diesel truck. With a weight capacity of 29 US tons (26 metric tonnes), it's the first electric big rig concept to hit the road, beating out the semi that Tesla announced last week it has been working on.

Of course, big rigs move freight across long distances, so the eTruck's current 124-mile maximum range likely won't be adequate for long hauls. But the "Urban" prefix denotes its use case: As a clean, quiet load-bearing vehicle ideal for cities. Daimler has already heavily tested the utility of close-range hauling with its Fuso Canter E-Cell pilot program, sending the all-electric 4.8-tonne capacity light trucks around Portugal last fall. The eTruck scales that concept up to the loads and conditions typically endured by semis.
Daimler envisions that its electric truck won't roll off the assembly lines until early in the next decade, according to its press release. By then, the company estimates, technological improvements will drive battery costs down by a factor of 2.5 and push efficiency up by the same factor.

6 Human Activities That Pose The Biggest Threat To The World’s Drinking Water

by Natasha Geiling
CREDIT: Shutterstock

Clean, drinkable water is more than a precious resource — it’s crucial to human life. Unfortunately, population growth and pollution are threatening to seriously undermine the availability of clean drinking water in many of the world’s major cities. According to a study published Monday in the Proceedings of the National Academy of Sciences, water treatment costs have risen by 50 percent in a third of large cities around the world.
That means that getting clean, pure drinking water to people has become an increasingly difficult task, requiring cities not only to pay for expensive treatments but also to build the plants that deliver them. All told, the study estimated that the total cost of degradation to our drinking water — in terms of treatment costs — is around $5.4 billion annually.
The study, which was a joint effort from researchers at the Nature Conservancy, Yale University, and Washington State University, looked specifically at how three kinds of water pollution — sediment, nitrogen, and phosphorus — have degraded the watersheds from which we obtain our drinking water. These kinds of pollution can enter into watersheds for a variety of reasons, but they all come back to one thing — human activity, which can have seriously detrimental impacts on drinking water.
Here are some of the ways that human activity is seriously messing with clean water, both in the United States and around the world.

Agriculture

Agriculture is a huge contributor to water pollution, from fertilizers used for row crops to the manure created by large-scale animal agriculture. In Washington state, a 2015 lawsuit found that a huge dairy operation had been polluting groundwater in a nearby community, causing the level of nitrates in residents’ drinking water to spike to unsafe levels. Nitrates, when found in high levels, can cause serious health problems for both infants and adults with compromised immune systems.
Elsewhere, industrial production of crops like corn and soy, which rely heavily on fertilizers to increase yields, can lead to dangerous algal blooms which, when toxic, can shut down drinking water for entire cities. When fertilizer is over-applied, not all of it can be taken up by crops — the remaining fertilizer ends up as runoff in streams, rivers, and, eventually, lakes and oceans. It’s fertilizer runoff that was largely blamed for the toxic algal bloom that shut down Toledo, Ohio’s drinking water for three days in 2014.
An algae bloom covers Lake Erie near the City of Toledo water intake crib about 2.5 miles off the shore of Curtice, Ohio.

CREDIT: AP Photo/Haraz N. Ghanbari, File

Fossil fuel production

Fossil fuel production is another human activity that places considerable strain on drinking water — and not just because fracking and coal mining use a great deal of water, but because their waste products can pollute groundwater, and therefore drinking water, as well.
With fracking — also known as hydraulic fracturing, in which high-pressure water, sand, and chemicals are used to break open subsurface shale and liberate the natural gas trapped inside — water is a massive component of the entire process. Each fracked well requires somewhere between 1 million and 6 million gallons of water, which can place strain on surface water resources. In California, for instance, fracking used 70 million gallons of water in 2014, despite an ongoing drought that forced water restrictions throughout the state. But fracking can also impact water quality well after the drilling itself has finished, when waste fluids are injected back underground for disposal. In some cases, that cocktail of wastewater and chemicals can leach into aquifers, polluting the groundwater near fracking operations. That's what happened in Dimock, Pennsylvania, in 2009, when two families sued Cabot Oil & Gas Corp. for polluting their wells with methane. That's also what happened in 2008 in Pavillion, Wyoming.
But it’s not just natural gas production that can affect drinking water — coal can, too. Coal ash, the byproduct of burning coal for energy, is one of the largest forms of waste generated in the United States. It also contains toxic contaminants, like arsenic and mercury, as well as radioactive material. In some cases, power companies store coal ash in unlined pits, from which it can leach into groundwater.
Coal ash pond from an abandoned coal fired power plant.
CREDIT: AP Photo/Steve Helber

Mountaintop removal mining — a technique where coal mining companies remove tops of mountains to reveal coal — can also degrade the drinking water of communities that live near the mining sites. The process of removing the mountaintop requires lots of heavy machinery and explosives, and results in tons of waste in the form of fallen trees, waste rock, and other debris. That waste is often dumped into sites known as “valley fills,” and the EPA has estimated that these dump sites have buried more than 2,000 miles of streams throughout the Appalachian region, leading to increased pollution downstream.

Sewage

In some places, population growth has strained wastewater treatment plants to the point where they cannot handle the amount of sewage that is produced by the city or town. In Gauteng, a province in South Africa, for example, population growth has forced two sewage treatment stations and a wastewater treatment plant to run at 200 percent capacity, which in turn causes raw sewage to back up and spill into surrounding fields, contaminating ground and surface water, according to a 2015 investigation by the South African newspaper Mail & Guardian. And in Rio de Janeiro, untreated sewage continues to flow into lagoons and the Guanabara Bay.
In the United States, raw sewage can make its way into lakes and streams during intense rainstorms, when pipes that carry sewage and rainwater in the same system (yes, those still exist, largely in the Eastern United States and around the Great Lakes) fill to capacity, spilling a mix of rainwater and raw sewage into bodies of water.
A flower floats along in a sewage run-off on Brooklyn's Gowanus Canal, in New York.

CREDIT: AP Photo/Bebeto Matthews

Pharmaceuticals

In addition to poop, sewage, fertilizers, and coal ash, the United States’ drinking water might have a drug problem. U.S. health providers — as well as livestock producers — use millions of pounds of pharmaceutical drugs each year, and some of those are ending up in treated drinking water. Antibiotics are a particular concern, because they could lead to antibiotic resistance.
Ninety percent of the pharmaceuticals in our drinking water end up there via excretion, while some find their way into drinking water through improper disposal or manufacturing discharges. Most wastewater treatment facilities aren’t equipped to remove traces of pharmaceuticals from water, and the EPA has not set standards for safe levels of pharmaceuticals in drinking water. The World Health Organization, however, states that the trace levels found in drinking water are unlikely to have adverse health impacts for humans, as they are “several orders of magnitude (typically more, and often much more, than 1000-fold) lower than the minimum therapeutic dose.”

Development and land-use change

Development and land-use change — the conversion of land from rural to urban uses — is a big part of what the PNAS study published Monday looked at when considering the degradation of drinking water. In some places, like in Asia, population density in key watersheds has expanded rapidly since the beginning of the 20th century. Globally, population density around key urban watershed sources has increased by a factor of 5.4 since 1900, with the most rapid development happening in the last few decades.
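For a sense of scale, that 5.4x increase can be converted to an average annual growth rate; a rough sketch, assuming steady exponential growth over the study's 1900-2005 window:

```python
factor = 5.4          # reported increase in population density around urban watersheds
years = 2005 - 1900   # the study's window

annual_growth = factor ** (1 / years) - 1
print(f"Average growth of ~{annual_growth:.1%} per year")
```

That works out to roughly 1.6 percent per year on average — and since the study notes the growth was concentrated in recent decades, the recent rate would be higher still.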
The study found that, between 1900 and 2005, sediment pollution increased in watersheds by 40 percent. Development is a massive driver of that pollution — when urbanization or agriculture comes into a watershed, land that was previously covered with native vegetation is cleared. That means that the soil that was once bound by root systems is free to run into waterways when a storm comes along, choking waterways with sediments and damaging both drinking water quality and ecosystems that depend on clean water.
In this July 26, 2013, photo, erosion is seen in a cornfield that was recently converted from pasture near Lineville, Iowa

CREDIT: AP Photo/Charlie Riedel

Deforestation — which often occurs to make way for agriculture or development — is also a huge contributor to sediment pollution. Wildfires can also increase sediment pollution, by burning away vegetation that kept soil intact.

Climate change

The bad news is that climate change is expected to exacerbate a lot of the problems that already threaten our waterways. It has already lengthened the global fire season, which means more possibilities for a wildfire to burn down trees and brush that help prevent sediment pollution. It is expected to increase the number of extreme precipitation events, which could lead to more instances of combined sewer systems spewing sewage into water bodies. And it’s expected to raise the temperature of the world’s waters, creating the perfect environment for toxic algae to grow.
The good news is that switching to renewable energy could have a doubly positive impact on drinking water: not only would it keep coal ash and fracking wastewater out of groundwater, but it would also slow the impacts of climate change on other parts of the water system.

Gene therapy developed for muscle wasting


A discovery by Washington State University scientist Dan Rodgers and collaborator Paul Gregorevic could save millions of people suffering from muscle wasting disease.
The result of the team's four-year project is a novel gene therapeutic approach. The work was published July 20 in Science Translational Medicine, a journal of the American Association for the Advancement of Science.
"Chronic disease affects more than half of the world's population," said Rodgers, professor of animal sciences and director of the Washington Center for Muscle Biology. "Most of those diseases are accompanied by muscle wasting.
"It occurs with chronic infection, muscular dystrophy, malnutrition and old age," he said. "About half the people who die from cancer are actually dying from muscle wasting and there's not one single therapy out there that addresses it.

Family history inspires search for treatment
"I have a strong motivation to do something about this, to do more than simply publish results," said Rodgers, who teamed with Gregorevic of Baker IDI Heart and Diabetes Institute in Australia. "My father died from cachexia," the wasting disease caused by cancer, "and my nephew has Duchenne muscular dystrophy, an incurable, fatal disease that could claim his life in his teens.
"Others have tried and failed to develop treatments for muscle wasting," Rodgers said, "and some drugs have even caused serious safety problems. Our targeted approach only affects muscle and completely avoids these problems, which is why we think we have a solution."
In the paper, lead author Catherine Winbanks, a postdoctoral fellow of Gregorevic, details how researchers built muscle in healthy mice and prevented the loss of skeletal and heart muscle in mice with tumors.

Hormone's muscle-wasting effect blocked
In cachexia, tumors secrete hormones that cause muscle deterioration; in effect, the body eats its own muscles, causing weakness, frailty and fatigue.
"What kills a lot of people isn't the loss of skeletal muscle but heart muscle," said Rodgers. "The heart literally shrinks, causing heart failure."
Researchers have long sought to stop this process, but failed to find a safe way. That's because the hormones that cause wasting -- in particular, a naturally occurring hormone called myostatin -- play important roles elsewhere in the body.
Rodgers and Gregorevic needed a way to stop myostatin, but only in muscles. Their solution: an adeno-associated virus -- a benign virus that specifically targets heart and skeletal muscle.
The virus delivers a small piece of DNA -- the gene for a signaling protein called Smad7 -- into muscle cells. Smad7 then blocks two signaling proteins called Smad2 and Smad3, which are activated by myostatin and other muscle-wasting hormones. By blocking those signals, Smad7 stops the breakdown of muscles.
"Smad7 is the body's natural brake and, by inhibiting the inhibitor, you build muscle," Rodgers said.
For cachexia patients, such a therapy could massively increase their chances of survival.
"Instead of having one year to fight cancer, you'd have 10 or 15," Rodgers said.

Startup works to develop commercial drug
In 2015, Rodgers launched AAVogen, a company that will develop this discovery into a commercial drug, AVGN7.
He has been working with Norman Ong, a technology licensing associate at WSU's Office of Commercialization, on patents, startup funding and recruitment for AAVogen. Using the funds from WSU's commercial gap fund award, Rodgers' lab will determine the minimum effective dose for AVGN7.
"We want to turn WSU discoveries into real-world uses that benefit the public," said Ong. "Dan is a very busy scientist, so we're proud to help him and AAVogen connect with the right people."
"I formed this company for one purpose: to move the science into society, to see it applied," Rodgers said. "WSU's Office of Commercialization has been instrumental and invaluable to this endeavor.
"Now we have a company with the potential to save a lot of lives," he said.

The hot attraction of gold


Gold had long been considered a non-magnetic metal. But researchers at Tohoku University recently discovered that gold can in fact be magnetized by applying heat.
They discovered that an electron spin -- the origin of magnetism -- plays an important role in a material's functionality at a non-constant temperature, where the magnetization and the heat flow interact. A subsequent experiment confirmed, for the first time, that magnetization in gold can be induced by the heat flow driven by a temperature change.
The setup of the experiment was quite simple, involving a bilayer comprising yttrium iron garnet (YIG), which is an insulating magnet, and a thin film of gold.
In the experiment, the temperature of the YIG side and that of the gold side were kept different to drive a heat flow perpendicular to the bilayer. In addition, a magnetic field was applied parallel to the heat flow. Under this condition, the Hall voltage*1 was measured in the gold film using an in-plane electric current. The Hall voltage showed a clear proportional dependence on the applied temperature gradient.
The team interpreted this Hall voltage as evidence of the evolution of magnetization in the thin gold film due to the heat flow. The effect was named the non-equilibrium anomalous Hall effect (nAHE).

"We are excited about the potential of this measurement as a standard method of detecting non-equilibrium magnetization, because there's no need for complicated processing and technologies compared to other highly sensitive magnetometry," says researcher Dazhi Hou. "On top of that, we can detect extremely small magnitudes of magnetization in this measurement. Such small magnetizations are the key to revealing as-yet-unknown useful properties of matter."
"These findings are expected to contribute to the development of innovative spintronics in the use of thermoelectric applications, like energy harvesting," adds Professor Eiji Saitoh, who led the research.
Details of this study were published online on July 26, 2016 in Nature Communications.
This research was achieved as part of JST-ERATO "Spin Quantum Rectification Project" led by Professor Eiji Saitoh.

*1 When an electric current passes through a conductor and a magnetic field is applied perpendicular to the direction of the current, an electric voltage is generated in the direction perpendicular to both the current and the magnetic field.
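For a sense of the magnitudes involved, the ordinary Hall voltage in a metal film is V_H = I·B / (n·e·t). A back-of-the-envelope sketch — the current, field, and film thickness below are illustrative assumptions, not the paper's parameters:

```python
# Ordinary Hall voltage in a gold film: V_H = I * B / (n * e * t)
n = 5.9e28      # carrier density of gold, per cubic meter (textbook value)
e = 1.602e-19   # elementary charge, in coulombs
I = 1e-3        # assumed in-plane current: 1 mA
B = 1.0         # assumed magnetic field: 1 tesla
t = 100e-9      # assumed film thickness: 100 nm

V_H = I * B / (n * e * t)
print(f"Hall voltage ~ {V_H:.1e} V")  # on the order of a microvolt
```

Even with these generous assumptions the signal is only about a microvolt, which illustrates why the researchers emphasize the sensitivity of the measurement: the heat-driven magnetization they detected is far smaller than that of an ordinary magnet.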
Story Source:
The above post is reprinted from materials provided by Tohoku University. Note: Materials may be edited for content and length.