
Artificial intelligence turns brain activity into speech

By Kelly Servick, Jan. 2, 2019, 1:30 PM

[Image: Epilepsy patients with electrode implants have aided efforts to decipher speech. WENHT/ISTOCK.COM]

For many people who are paralyzed and unable to speak, signals of what they’d like to say hide in their brains. No one has been able to decipher those signals directly. But three research teams recently made progress in turning data from electrodes surgically placed on the brain into computer-generated speech.
Using computational models known as neural networks, they reconstructed words and sentences that were, in some cases, intelligible to human listeners.

None of the efforts, described in papers in recent months on the preprint server bioRxiv, managed to re-create speech that people had merely imagined. Instead, the researchers monitored parts of the brain as people either read aloud, silently mouthed speech, or listened to recordings. But showing that the reconstructed speech is understandable is “definitely exciting,” says Stephanie Martin, a neural engineer at the University of Geneva in Switzerland who was not involved in the new projects.

People who have lost the ability to speak after a stroke or disease can use their eyes or make other small movements to control a cursor or select on-screen letters. (Cosmologist Stephen Hawking tensed his cheek to trigger a switch mounted on his glasses.) But if a brain-computer interface could re-create their speech directly, they might regain much more: control over tone and inflection, for example, or the ability to interject in a fast-moving conversation.

The hurdles are high. “We are trying to work out the pattern of … neurons that turn on and off at different time points, and infer the speech sound,” says Nima Mesgarani, a computer scientist at Columbia University. “The mapping from one to the other is not very straightforward.” How these signals translate to speech sounds varies from person to person, so computer models must be “trained” on each individual. And the models do best with extremely precise data, which requires opening the skull.

Researchers can do such invasive recording only in rare cases. One is during the removal of a brain tumor, when electrical readouts from the exposed brain help surgeons locate and avoid key speech and motor areas. Another is when a person with epilepsy is implanted with electrodes for several days to pinpoint the origin of seizures before surgical treatment. “We have, at maximum, 20 minutes, maybe 30,” for data collection, Martin says. “We’re really, really limited.”

The groups behind the new papers made the most of precious data by feeding the information into neural networks, which process complex patterns by passing information through layers of computational “nodes.” The networks learn by adjusting connections between nodes. In the experiments, networks were exposed to recordings of speech that a person produced or heard and data on simultaneous brain activity.

Mesgarani’s team relied on data from five people with epilepsy. Their network analyzed recordings from the auditory cortex (which is active during both speech and listening) as those patients heard recordings of stories and people naming digits from zero to nine. The computer then reconstructed spoken numbers from neural data alone; when the computer “spoke” the numbers, a group of listeners named them with 75% accuracy.

[Audio: A computer reconstruction based on brain activity recorded while a person listened to spoken digits. H. Akbari et al., doi.org/10.1101/350124]
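All three groups followed broadly the same recipe: align recorded brain activity with the audio a person produced or heard, then train a network to predict an audio representation from the neural data. The sketch below illustrates that idea at a toy scale in PyTorch; the electrode count, window size, spectrogram bins, layer sizes, and random stand-in data are all assumptions made for illustration, not details taken from any of the three papers, and the real systems use more elaborate architectures plus a separate step to turn spectrograms back into sound.

```python
# Minimal, hypothetical sketch of the shared recipe: learn a mapping from
# windows of brain-activity features (e.g., per-electrode high-gamma power)
# to frames of an audio spectrogram. All shapes and training details are
# illustrative assumptions, not reproductions of the papers' methods.
import torch
from torch import nn

N_ELECTRODES = 64   # assumed electrode count
CONTEXT = 9         # assumed number of neural time steps per audio frame
N_MEL = 40          # assumed mel-spectrogram bins
N_SAMPLES = 2000    # assumed number of paired (brain, audio) frames

# Stand-in data; in an experiment these come from aligned recordings of
# brain activity and the speech the person produced or heard.
brain = torch.randn(N_SAMPLES, N_ELECTRODES * CONTEXT)
audio = torch.randn(N_SAMPLES, N_MEL)

# A small fully connected network standing in for the deeper models used
# in the studies.
decoder = nn.Sequential(
    nn.Linear(N_ELECTRODES * CONTEXT, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_MEL),
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    predicted = decoder(brain)        # predicted spectrogram frames
    loss = loss_fn(predicted, audio)  # distance from the real audio frames
    loss.backward()
    optimizer.step()

# At test time, the trained decoder maps unseen brain activity to spectrogram
# frames; a separate vocoder or synthesis step would convert them to sound.
unseen = torch.randn(10, N_ELECTRODES * CONTEXT)
reconstruction = decoder(unseen)
```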
Another team, led by computer scientist Tanja Schultz at the University of Bremen in Germany, relied on data from six people undergoing brain tumor surgery. A microphone captured their voices as they read single-syllable words aloud. Meanwhile, electrodes recorded from the brain’s speech planning areas and motor areas, which send commands to the vocal tract to articulate words. Computer scientists Miguel Angrick and Christian Herff, now with Maastricht University, trained a network that mapped electrode readouts to the audio recordings, and then reconstructed words from previously unseen brain data. According to a computerized scoring system, about 40% of the computer-generated words were understandable.

[Audio: Original audio from a study participant, followed by a computer recreation of each word, based on activity in speech planning and motor areas of the brain. M. Angrick et al., doi.org/10.1101/478644]

Finally, neurosurgeon Edward Chang and his team at the University of California, San Francisco, reconstructed entire sentences from brain activity captured from speech and motor areas while three epilepsy patients read aloud. In an online test, 166 people heard one of the sentences and had to select it from among 10 written choices. Some sentences were correctly identified more than 80% of the time. The researchers also pushed the model further: They used it to re-create sentences from data recorded while people silently mouthed words. That’s an important result, Herff says, “one step closer to the speech prosthesis that we all have in mind.”

However, “What we’re really waiting for is how [these methods] are going to do when the patients can’t speak,” says Stephanie Riès, a neuroscientist at San Diego State University in California who studies language production. The brain signals when a person silently “speaks” or “hears” their voice in their head aren’t identical to signals of speech or hearing. Without external sound to match to brain activity, it may be hard for a computer even to sort out where inner speech starts and ends.

Decoding imagined speech will require “a huge jump,” says Gerwin Schalk, a neuroengineer at the National Center for Adaptive Neurotechnologies at the New York State Department of Health in Albany. “It’s really unclear how to do that at all.”

One approach, Herff says, might be to give feedback to the user of the brain-computer interface: If they can hear the computer’s speech interpretation in real time, they may be able to adjust their thoughts to get the result they want. With enough training of both users and neural networks, brain and computer might meet in the middle.

*Clarification, 8 January, 5:50 p.m.: This article has been updated to clarify which researchers worked on one of the projects.