Spotify Captures Half Of British Teens' Streaming Hours
U.K. Teens Prefer Streaming Over Radio
Research reveals young U.K. music listeners rely on playlists, Spotify and YouTube more than radio.
Philip Merrill, GRAMMYs, Sep 6, 2017 – 4:50 pm

Research firm AudienceNet revealed fascinating findings from a June 2017 survey of music consumption in the U.K., showing radio still has a place in young people's lives, but streaming playlists, Spotify and YouTube have replaced passive listening with an on-demand culture that gives listeners the power to navigate.

The "Audiomonitor" survey shows the 16–24 age group has diverged from the U.K. national average, signaling a new stage as streaming services become dominant with youth. This divergence toward on-demand listening is strongest among 16–19 year-olds.

Albums versus playlists: Brits over 65 spend 41 percent of their listening time on albums, but among 16–24 year-olds time spent tilts in favor of playlists. The 16–19 segment spent only 20 percent of their time on albums and 35 percent of their time listening to playlists.

Radio: The average British listener spends 43 percent of their listening time on radio, and three-quarters of 16–24 year-olds listen to some radio each week. But among 16–19 year-olds, radio listening has dropped to just 10 percent of their time.

Streaming: While only 24 percent of U.K. listeners overall use streaming, the rate rises to 62 percent among the 16–19 age group.

YouTube versus Spotify: YouTube has a 31 percent weekly reach on average, almost double Spotify's 16 percent. But within the streaming hours of the 16–19 demographic, Spotify dominates with 51 percent of streaming time, while YouTube accounts for 33 percent.
AudienceNet estimates that Spotify accounts for a total of 30 percent of 16–19 year-olds' listening hours.

More changes are inevitable as streaming audiences supply real-time data detailing their listening habits to music services, to artists, and across social media.
Tata Global Beverages on Wednesday said it might sell its loss-making China subsidiary. The Chinese subsidiary, Zhejiang Tata Tea Extraction Company (ZTTECL), reported a net loss of Rs. 15 crore in the previous financial year.

"For China, we are exploring different options, which could be restructuring or a sale," Cyrus Mistry, the Tata Group chairman, said at Tata Global Beverages' (TGBL) annual shareholder meeting in Kolkata on Wednesday.

The Indian non-alcoholic beverage company holds 81 percent in ZTTECL. According to TGBL's balance sheet, the Chinese subsidiary reported losses of over Rs. 100 crore in the financial year that ended on Dec. 31, 2015.

"We continue to have those challenges and will take a call on the business there in the coming year. It is significantly a B2B business over there. It is not a consumer business. That's not the reason that business is getting impacted. The reason is more from a productivity perspective," Mistry added.

Various options have been considered for "restructuring" the Chinese business. The joint venture recorded liabilities of Rs. 115.54 crore, and its loss after tax was pegged at Rs. 15.34 crore as of Dec. 31, 2015, according to TGBL's annual report.

"Delays continue in stabilisation of the China business. While prospective customers have shown interest in our instant tea products, the final conversion to orders will be dependent on meeting the product profile requirements. Going forward, stabilising the production process and establishing a pipeline of external customers and successful scaling of technology will be key to the success of the project," the annual report said.

The joint venture that created Zhejiang Tata Tea Extraction was inked in 2007; TGBL holds a 70 percent stake, with its Chinese partner holding the rest.

The TGBL stock was trading at Rs. 141.75 at around 2.31 p.m. on Thursday, up 1 percent from its previous close on the Bombay Stock Exchange.
People have been experiencing slow internet since Monday morning, as mobile phone operators have been asked to slow down internet speeds for 150 minutes each morning to check question paper leaks during the SSC examinations.

The Bangladesh Telecommunication Regulatory Commission (BTRC) has instructed the operators to slow down internet speeds from 8:00am to 10:30am from 12 to 24 February, reports UNB.

The instruction was given in a bid to prevent question paper leaks during the ongoing Secondary School Certificate examinations, according to the BTRC.

The internet was slowed down for 30 minutes from 10:00pm on Sunday as a trial run, following the BTRC directive.

Although the government has taken some measures to stop question paper leaks, SSC question papers have been leaked from the very first day of the examinations.
The Anti-Corruption Commission (ACC) on Sunday filed an appeal seeking tougher punishment for BNP chairperson Khaleda Zia in the Zia Orphanage Trust graft case, in which she was earlier sentenced to five years' imprisonment. ACC lawyers submitted the appeal to the concerned office of the High Court in the morning, reports UNB.

"We have completed the affidavit of the appeal seeking harsher punishment for Khaleda Zia in the Zia Orphanage Trust graft case," said Khurshid Alam Khan, one of the lawyers for the ACC.

On 19 March, the Appellate Division of the Supreme Court stayed Khaleda Zia's bail till 8 May in the graft case. The SC also allowed the state and the ACC to file petitions against the HC order that granted bail to Khaleda, fixing 8 May for the next hearing. It also ordered the ACC and the state to submit concise statements of appeal within two weeks, and the BNP within four weeks.

On 12 March, the High Court granted a four-month interim bail to Khaleda Zia. On 8 February, the Dhaka Special Court-5 had convicted the BNP chief and sentenced her to five years' imprisonment in the case. Khaleda was then sent to the old central jail on Nazimuddin Road in the city.

The court read out a 632-page summarised version of the verdict that day and released the full 1,174-page copy of the verdict on 19 February.
A motorcyclist was killed when a covered van hit his vehicle in the Mansurabad area of Chattogram city early Sunday. The deceased is Md Mirza, reports UNB.

The accident took place when the van hit the motorcycle in front of the regional passport office around 2:00am, leaving him critically injured, said Chittagong Medical College Hospital (CMCH) police outpost constable Amir Hossain. He was brought to the hospital, where the duty doctors declared him dead, the constable added.
Award-winning director Lisa Sabina Harney is screening her drama documentary Satyagraha – Truth Force in the capital.

Presented by Goddess Films in association with the Harvard Club of India, the drama documentary tells the story of a group of humble Indian saints who believe the sanctity of their holy river, the Ganges, is being destroyed by corruption and a powerful mining lobby. They have been threatened, beaten, jailed and bribed. Two have died; both were murdered, or so they believe.

Satyagraha – Truth Force follows the satyagraha, or hunger strike, of Swami Shivanand as he fights with his life to protect the river and attempts to find justice for his disciple, whom he believes was poisoned.

Harney has spent 15 years working intensively as a writer, producer and director of award-winning documentaries and docu-drama. She is the recipient of a Golden Eagle and a Hugo Award and has been interviewed and published by the Guerrilla Filmmakers guide as an expert in dramatised documentary.

The 91-minute movie will be screened in the presence of Yogesh J Karan, High Commissioner of the Republic of Fiji, and will be followed by a discussion.

When: 9th January, 6 pm onwards
Where: Alliance Française de Delhi
Delhi Metro on Friday decided to install solar power panels on the foot overbridges at the Faridabad corridor stations (Sarai to Escorts Mujesar), which will produce 225 kW of solar power.
DrivenData has come out with a new tool named Deon, which lets you easily add an ethics checklist to your data science projects. Deon aims to push the conversation about ethics in data science, machine learning and artificial intelligence by providing actionable reminders to data scientists.

According to the Deon team, "it's not up to data scientists alone to decide what the ethical course of action is. This has always been a responsibility of organizations that are part of civil society. This checklist is designed to provoke conversations around issues where data scientists have particular responsibility and perspective."

Deon comes with a default checklist, but you can also develop your own custom checklists by removing items and sections, or marking items as N/A, depending on the needs of the project. Each item in the default checklist is also linked to real-world examples. To run Deon for your data science projects, you need Python 3 or greater.

Let's now discuss the two types of checklists, default and custom, that come with Deon.

Default checklist

The default checklist comprises sections on Data Collection, Data Storage, Analysis, Modeling and Deployment.

Data Collection

This section covers Informed consent, Collection bias and Limit PII exposure. Informed consent calls for a mechanism for gathering consent in which users have a clear understanding of what they are consenting to. Collection bias checks for sources of bias introduced during data collection and survey design. Lastly, Limit PII exposure covers ways to minimize the exposure of personally identifiable information (PII).

Data Storage

This section covers Data security, Right to be forgotten and Data retention plan. Data security refers to a plan to protect and secure data. Right to be forgotten calls for a mechanism by which an individual can have his or her personal information removed.
Data retention plan consists of a plan to delete the data when it is no longer needed.

Analysis

This section comprises Missing perspectives, Dataset bias, Honest representation, Privacy in analysis and Auditability. Missing perspectives addresses blind spots in data analysis through engagement with relevant stakeholders. Dataset bias covers examining the data for possible sources of bias and taking steps to mitigate or address them. Honest representation checks whether visualizations, summary statistics and reports are designed to honestly represent the underlying data. Privacy in analysis ensures that data with PII is not used or displayed unless necessary for the analysis. Auditability refers to producing an analysis that is well documented and reproducible.

Modeling

This section covers Proxy discrimination, Fairness across groups, Metric selection, Explainability and Communicate bias. Proxy discrimination is about ensuring that the model does not rely on variables or proxies that are discriminatory. Fairness across groups cross-checks whether the model results have been tested for fairness with respect to different affected groups. Metric selection considers the effects of optimizing for the defined metrics, as well as additional metrics. Explainability is about explaining the model's decisions in understandable terms. Communicate bias makes sure that the shortcomings, limitations and biases of the model have been properly communicated to relevant stakeholders.

Deployment

This section covers Redress, Roll back, Concept drift and Unintended use. Redress covers discussing with the organization a plan to respond in case users are harmed by the results. Roll back calls for a way to turn off or roll back the model in production when required. Concept drift refers to relationships between input and output data in a problem that change over time; this checklist item reminds the user to test for and monitor concept drift.
This is to ensure that the model remains fair over time. Unintended use prompts the user about steps to take to identify and prevent unintended uses and abuses of the model.

Custom checklists

For projects with particular concerns, it is recommended to create your own checklist.yml file. Custom checklists are required to follow the same schema as the default checklist.yml: a top-level title, which is a string, and sections, which are a list. Each section in the list must have a title, a section_id and a list of lines. Each line must include a line_id, a line_summary and a line string, which is the content.

When changing the default checklist, keep in mind that Deon's goal is to have checklist items that are actionable. This is why users are advised to avoid items that are vague (e.g., "do no harm") or extremely specific (e.g., "remove social security numbers from data").

For more information, be sure to check out the official DrivenData blog post.
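A custom checklist following that schema might look like the sketch below. The file name and item wording here are hypothetical; only the title/sections/lines structure comes from the schema the Deon team describes:

```yaml
# custom_checklist.yml -- hypothetical example; only the schema
# (title -> sections -> lines) follows Deon's documented format.
title: Acme Data Ethics Checklist
sections:
  - title: Data Collection
    section_id: A
    lines:
      - line_id: A.1
        line_summary: Informed consent
        line: >-
          If we collect data from people, do they have a clear
          understanding of what they are consenting to?
      - line_id: A.2
        line_summary: Collection bias
        line: >-
          Have we checked for sources of bias introduced during
          data collection and survey design?
  - title: Deployment
    section_id: B
    lines:
      - line_id: B.1
        line_summary: Roll back
        line: >-
          Is there a way to turn off or roll back the model in
          production if needed?
```

Deon can then render a file like this to Markdown the same way it generates the default checklist with `deon -o ETHICS.md`; check `deon --help` for the exact flag that points it at a custom checklist file, as flag names may vary by version.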
Last week, the team at Facebook AI Research announced that it is open sourcing the PyText NLP framework. PyText, a deep-learning-based NLP modeling framework, is built on PyTorch. Facebook is open sourcing some of the conversational AI tech that powers the Portal video chat display and M suggestions on Facebook Messenger.

How is PyText useful for Facebook?

The PyText framework is used for tasks like document classification, semantic parsing, sequence tagging and multitask modeling. It fits easily into research and production workflows and emphasizes robustness and low latency to meet Facebook's real-time NLP needs. PyText powers the models behind more than a billion daily predictions at Facebook. The framework addresses the conflicting requirements of enabling rapid experimentation and serving models at scale by providing simple interfaces and abstractions for model components. It uses PyTorch's capability to export models for inference through the optimized Caffe2 execution engine.

Features of PyText

- Production-ready models for various NLP/NLU tasks, such as text classifiers and sequence taggers.
- Distributed-training support, built on the new C10d backend in PyTorch 1.0.
- Extensible components that help in creating new models and tasks.
- Modularity that makes it possible to create new pipelines from scratch and to modify existing workflows.
- A simplified workflow for faster experimentation.
- Access to a rich set of prebuilt model architectures for text processing and vocabulary management.
- An end-to-end platform for developers, whose modular structure helps engineers incorporate individual components into existing systems.
- Support for string tensors to work efficiently with text in both training and inference.
PyText for NLP development

PyText improves the workflow for NLP and supports distributed training to speed up NLP experiments that require multiple runs.

Easily portable: PyText models can be easily shared across different organizations in the AI community.

Prebuilt models: With models for NLP tasks such as text classification, word tagging, semantic parsing and language modeling, the framework makes it easy to use prebuilt models on new data.

Contextual models: To improve conversational understanding in various NLP tasks, PyText uses contextual information, such as an earlier part of a conversation thread. There are two contextual models in PyText: a SeqNN model for intent-labeling tasks and a Contextual Intent Slot model for joint training on both tasks.

PyText exports models to Caffe2

PyText uses PyTorch 1.0's capability to export models for inference through the optimized Caffe2 execution engine. Native PyTorch models require a Python runtime, which is not scalable because of the multithreading limitations of Python's Global Interpreter Lock. Exporting to Caffe2 provides an efficient multithreaded C++ backend for serving huge volumes of traffic.

PyText's capabilities for testing new state-of-the-art models will be improved further in the next release. Since putting sophisticated NLP models on mobile devices is a big challenge, the team at Facebook AI Research will work towards building an end-to-end workflow for on-device models. The team plans to add support for multilingual modeling and other modeling capabilities, to make models easier to debug, and possibly to add further optimizations for distributed training.

"PyText has been a collaborative effort across Facebook AI, including researchers and engineers focused on NLP and conversational AI, and we look forward to working together to enhance its capabilities," said the Facebook AI Research team.
Users are excited about this news and want to explore more. To learn about PyText in detail, check out the release notes on GitHub.