In his essay “Politics and the English Language” (1946), Orwell wrote about the importance of precise and clear language, arguing that vague writing can be used as a powerful tool of political manipulation. In that essay, Orwell provides six rules for writers:
1. Never use a metaphor, simile or other figure of speech which you are used to seeing in print.
2. Never use a long word where a short one will do.
3. If it is possible to cut a word out, always cut it out.
4. Never use the passive where you can use the active.
5. Never use a foreign phrase, a scientific word or a jargon word if you can think of an everyday English equivalent.
6. Break any of these rules sooner than say anything outright barbarous.
“Machine learning hit the public awareness after spectacular advances in language translation and image recognition. These are typically problems of classification — does a photo show a poodle, a Chihuahua or perhaps just a blueberry muffin? Surprisingly, the latter two look quite similar (E. Togootogtokh and A. Amartuvshin, preprint at https://arxiv.org/abs/1801.09573; 2018). Less widely known is that machine learning for classification has an even longer history in the physical sciences. Recent improvements coming from so-called ‘deep learning’ algorithms and other neural networks have served to make such applications more powerful.”
Read more on The Power of Machine Learning via Nature Physics
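The classification task described in the excerpt above can be sketched in a few lines. This is a minimal, hypothetical illustration (a nearest-centroid classifier on invented features), not the deep-learning methods the article refers to:

```python
# A minimal sketch of classification: assign a feature vector to the
# class whose centroid is closest. The class names, features, and data
# below are invented purely for illustration.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, centroids):
    """Return the label whose centroid is closest to x (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Toy training data: hypothetical [ear_pointiness, fur_curliness] features.
training = {
    "poodle":    [[0.2, 0.9], [0.3, 0.8]],
    "chihuahua": [[0.9, 0.1], [0.8, 0.2]],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify([0.25, 0.85], centroids))  # a curly-furred example -> "poodle"
```

Real systems replace the hand-picked features and centroids with representations learned by a neural network, but the decision step is the same in spirit: map an input to the most plausible class.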
“Recently, artificial intelligence and machine-learning algorithms have gained significant attention in the field of osteoporosis [1]. They are recognized for their potential in exploring new research fields, including the investigation of novel risk factors and the prediction of osteoporosis, falls, and fractures by leveraging biological testing, imaging, and clinical data [2]. This new approach might improve the performance of current fracture prediction models by including all possible variables such as the bone mineral density (BMD) of all sites as well as trabecular bone score (TBS) data [3]. Also, the new model could suggest novel factors that could influence the fracture by calculating all variables through a deep learning network. Although there are a few studies in osteoporosis and fracture prediction using machine learning [4–6], a fracture-prediction machine-learning model with a longitudinal, large-sized cohort study including BMD and TBS has not been developed [3].”
Read more on Clinical Applicability of Machine Learning in Family Medicine via the Korean Journal of Family Medicine.
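The kind of risk model the excerpt describes, combining variables such as bone mineral density (BMD) and trabecular bone score (TBS), can be sketched as a logistic score. The weights, threshold behaviour, and variable choices below are invented for illustration and have no clinical validity:

```python
import math

# Hypothetical sketch of a fracture-risk score combining age, a BMD T-score,
# and TBS via a logistic function. All coefficients are made up for
# illustration; a real model would learn them from cohort data.

def fracture_risk(age, bmd_t_score, tbs):
    """Logistic risk score in (0, 1); higher means higher predicted risk."""
    # Lower BMD T-scores and lower TBS push the score up; age pushes it up.
    z = 0.05 * age - 1.2 * bmd_t_score - 2.0 * (tbs - 1.3)
    return 1.0 / (1.0 + math.exp(-z))

low  = fracture_risk(age=55, bmd_t_score=0.5,  tbs=1.45)
high = fracture_risk(age=80, bmd_t_score=-2.8, tbs=1.10)
print(low < high)  # the degraded-bone profile scores higher -> True
```

A deep-learning model of the sort the article anticipates would replace the fixed linear combination with a learned network over many more variables, but the output is interpreted the same way: a probability-like score for a future fracture.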
“In their new paper ‘Do large language models have a legal duty to tell the truth?’, published in Royal Society Open Science, the Oxford researchers set out how LLMs produce responses that are plausible, helpful and confident but contain factual inaccuracies, misleading references and biased information. They term this problematic phenomenon ‘careless speech’, which they believe causes long-term harms to science, education and society.
Lead author Professor Sandra Wachter, Professor of Technology and Regulation, Oxford Internet Institute explains: ‘LLMs pose a unique risk to science, education, democracy, and society that current legal frameworks did not anticipate. This is what we call ‘careless speech’ or speech that lacks appropriate care for truth. Spreading careless speech causes subtle, immaterial harms that are difficult to measure over time. It leads to the erosion of truth, knowledge and shared history and can have serious consequences for evidence-based policy-making in areas where details and truth matter such as health care, finance, climate change, media, the legal profession, and education. In our new paper, we aim to address this gap by analysing the feasibility of creating a new legal duty requiring LLM providers to create AI models that, put simply, will ‘tell the truth’.”
Read more on Large Language Models pose a risk to society and need tighter regulation via University of Oxford.
“Nearly 500 Quebec doctors have signed an open letter demanding their medical associations denounce the crisis in Gaza and call for an immediate ceasefire and access to humanitarian aid.
‘We, physicians in Quebec, are deeply concerned with the humanitarian catastrophe in Gaza that worsens each day,’ reads the letter, published Thursday morning. ‘One hundred and fifty eight days of devastation, 31,272 killed and 73,024 injured, 1.5 million refugees. Remaining silent in the face of suffering of this magnitude is contrary to our role as physicians and a forsaking of our shared humanity.’
Included among the signatories are Joanne Liu, former international president of Médecins Sans Frontières/Doctors Without Borders and a professor at McGill University’s School of Population and Global Health, and Amir Khadir, former Québec solidaire MNA for the Mercier riding and a specialist in infectious diseases.
The petition is calling on four provincial medical associations — the Collège des médecins du Québec, the Fédération des médecins omnipraticiens du Québec, the Fédération des Médecins spécialistes du Québec, and the Collège québécois des médecins de famille — to issue a statement demanding an immediate ceasefire, immediate access to drinkable water, an end to blockades preventing entry of medical supplies and the release of hostages on both sides of the conflict.
The idea for the open letter originated on Facebook, where some Quebec doctors involved in groups on the social media site voiced the distress they were feeling over the war. Last week, a few started their own Facebook page, titled ‘Quebec doctors against the genocide in Gaza,’ that quickly drew more than 500 members.”
Read more on “Quebec doctors sign open letter demanding ceasefire in Gaza: Remaining silent in the face of suffering of this magnitude is contrary to our role as physicians” via The Gazette.
Photograph Copyright Ahmad Hasaballah/Getty Images
“With its hairless silicone skin and blue complexion, Emo the robot looks more like a mechanical re-creation of the Blue Man Group than a regular human. Until it smiles.
In a study published March 27 in Science Robotics, researchers detail how they trained Emo to smile in sync with humans. Emo can predict a human smile 839 milliseconds before it happens and smile back.
Right now, in most humanoid robots, there’s a noticeable delay before they can smile back at a person, often because the robots are imitating a person’s face in real time. ‘I think a lot of people actually interacting with a social robot for the first time are disappointed by how limited it is,’ says Chaona Chen, a human-robot interaction researcher at the University of Glasgow in Scotland. ‘Improving robots’ expression in real time is important.'”
Read more on This robot can tell when you’re about to smile — and smile back via Science News.
Art featured: Gustave Courbet, The Desperate Man, 1843–45. Image via Wikimedia Commons
“The human brain is home to around 86 billion neurons, nerve cells connected to one another by synapses.
Every time we want to move, feel or think, a tiny electrical impulse is generated and sent incredibly quickly from one neuron to another.
Scientists have developed devices which can detect some of those signals – either using a non-invasive cap placed on the head or wires implanted into the brain itself.
The technology – known as a brain-computer interface (BCI) – is where many millions of dollars of research funding appears to be heading at the moment.”
Learn more on Neuralink: Musk’s firm says first brain-chip patient plays online chess via BBC.
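One small step a brain-computer interface pipeline might take, as described in the excerpt above, is picking out events in a sampled voltage trace. This is a minimal, hypothetical sketch with a synthetic signal and an arbitrary threshold, not a description of any real BCI system:

```python
# Detect threshold crossings ("events") in a sampled signal: report the
# index of each sample where the trace first rises above the threshold.
# The trace and threshold below are synthetic, for illustration only.

def detect_events(samples, threshold):
    """Return indices where the signal first crosses above the threshold."""
    events = []
    above = False
    for i, v in enumerate(samples):
        if v > threshold and not above:
            events.append(i)
        above = v > threshold
    return events

trace = [0.1, 0.2, 1.5, 1.7, 0.3, 0.2, 2.1, 0.4]
print(detect_events(trace, 1.0))  # -> [2, 6]
```

Real systems do far more (filtering, feature extraction, decoding into commands), but simple event detection like this conveys the basic idea of turning a continuous neural signal into discrete, actionable information.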
“In this issue of the World Happiness Report we focus on the happiness of people at different stages of life. In the seven ages of man in Shakespeare’s As You Like It, the later stages of life are portrayed as deeply depressing. But happiness research shows a more nuanced picture, and one that is changing over time. We encourage you to explore the 2024 report for the latest findings on the happiness of the world’s young, the old – and everyone in between.”
Learn more on the World Happiness Report via Oxford University.
“So there is no shame in feeling lonely even though society often tells us that we’ve done something wrong or you know if we somehow find ourselves alone on a Friday night or if we feel lonely on the playground you know, or in the cafeteria at school. Perhaps most insidious but most harmful is the impact loneliness has on our sense of self. Over time we come to believe when we’re lonely that we’re lonely because we’re not likable which makes it harder to take a risk and a chance in conversation. So in that way loneliness can be a downward spiral and part of the challenge and the mission to build a more connected society and a more connected life, is figuring out how do we break that downward spiral so that we can once again rebuild connection which is what we’re naturally called to do.” Vivek Murthy on Loneliness and the Power of Connection