BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Research on Research - ECPv6.9.1//NONSGML v1.0//EN CALSCALE:GREGORIAN METHOD:PUBLISH X-ORIGINAL-URL:https://researchonresearch.org X-WR-CALDESC:Events for Research on Research REFRESH-INTERVAL;VALUE=DURATION:PT1H X-Robots-Tag:noindex X-PUBLISHED-TTL:PT1H BEGIN:VTIMEZONE TZID:UTC BEGIN:STANDARD TZOFFSETFROM:+0000 TZOFFSETTO:+0000 TZNAME:UTC DTSTART:20220101T000000 END:STANDARD END:VTIMEZONE BEGIN:VEVENT DTSTART;TZID=UTC:20230615T180000 DTEND;TZID=UTC:20230615T193000 DTSTAMP:20250708T064222 CREATED:20250128T110759Z LAST-MODIFIED:20250128T110759Z UID:2141-1686852000-1686857400@researchonresearch.org SUMMARY:Can AI predict research impacts? DESCRIPTION:The success or failure of medical research is judged by patient outcomes far downstream of the strategic decisions that initiate it. Optimising translational impact therefore relies on long-range forecasting\, for which no established framework exists. The evaluation of research proposals by expert appraisal of their content is undermined by difficulties with scaling\, reproducibility\, generalisability\, and bias. Evaluation by summary bibliometrics of public reception offers greater objectivity but doubtful fidelity. Both approaches favour the familiar\, the conventional\, the plausible\, and the incremental; and oppose the unusual\, the unorthodox\, the counter-intuitive\, and the disruptive: rare characteristics on which translational success increasingly depends. \n\n\n\nIn this talk\, Amy Nelson and Parashkev Nachev (UCL) advocate for a third way\, founded on richly expressive models of research content\, that seeks to combine the finesse of a human expert with the rigour of a machine. 
They argue such models can successfully capture regularities too intricate to be either intuitively apprehensible or reducible to summary metrics\, thereby illuminating complex characteristics of translational success in which testable hypotheses about optimal research strategy may be grounded.  \n\n\n\nThey describe a proof-of-concept analysis of the comparative predictability of future real-world translation—as indexed by inclusion in patents\, guidelines\, or policy documents—from complex models of title/abstract-level published research content versus citations and metadata alone. Quantifying predictive performance out-of-sample\, ahead of time\, across major domains\, using the entire corpus of biomedical research captured by Microsoft Academic Graph from 1990–2019\, encompassing 43.3 million papers\, they show that high-dimensional models of titles\, abstracts\, and metadata exhibit substantially higher fidelity (AUC > 0.9) than simple models\, generalise across time and domain\, and transfer to recognising the papers of Nobel laureates. Their talk will build on this recent paper in Patterns. \n\n\n\nThe Speakers\n\n\n\nAmy Nelson is a Senior Research Associate in the High Dimensional Neurology Group at UCL Queen Square Institute of Neurology\, Research Impact Fellow at the NIHR UCLH Biomedical Research Centre\, and a junior doctor. Dr Nelson builds AI models for clinical\, operational and research impact objectives across computer vision\, deep representation learning\, and natural language processing domains. \n\n\n\nParashkev Nachev is a Professor of Neurology at the UCL Institute of Neurology\, and Honorary Consultant Neurologist at the National Hospital for Neurology and Neurosurgery\, Queen Square. His High-Dimensional Neurology Group develops novel computational methods for drawing representational\, predictive\, and prescriptive intelligence from rich data. 
URL:https://researchonresearch.org/event/can-ai-predict-research-impacts/ CATEGORIES:Online,Seminar,AI ATTACH;FMTTYPE=image/jpeg:https://researchonresearch.org/wp-content/uploads/2023/09/artificial-intelligence-ai-and-machine-learning-2023-05-21-04-29-23-utc-scaled-e1737735189337.jpg END:VEVENT BEGIN:VEVENT DTSTART;TZID=UTC:20221212T080000 DTEND;TZID=UTC:20221212T170000 DTSTAMP:20250708T064222 CREATED:20250128T110800Z LAST-MODIFIED:20250128T110800Z UID:2143-1670832000-1670864400@researchonresearch.org SUMMARY:Machine learning\, metrics & merit: the future of research assessment DESCRIPTION:The use of quantitative indicators and metrics in research assessment continues to generate a mix of enthusiasm\, hostility and critique. To this mix\, we can add growing interest in uses of machine learning and artificial intelligence (AI) to automate assessment processes\, and reduce the cost and bureaucracy of conventional methods of peer and panel-based review. \n\n\n\nNovel methods also bring potential pitfalls\, uncertainties and dilemmas\, and may operate in some tension with moves towards responsible research assessment\, as reflected in the Declaration on Research Assessment (DORA) and the new Coalition for Advancing Research Assessment (CoARA). \n\n\n\nAs the UK again reviews its approach to research assessment and the design of the Research Excellence Framework (REF)\, these and other issues are up for discussion through the Future Research Assessment Programme (FRAP)\, initiated by the four UK higher education funding bodies. \n\n\n\nThis workshop launches two new studies that should make significant contributions to the FRAP process. \n\n\n\nThe first\, led by Professor Mike Thelwall\, is a ground-breaking analysis of whether one could run a REF exercise using AI. 
The second is an updated review of the role of metrics in the UK research assessment system\, which builds on the 2015 review\, The Metric Tide\, which called for responsible approaches to the use of metrics\, and cautioned against purely metric-based approaches to assessment. For more on these studies\, see recent articles in Nature\, Research Professional and Times Higher Education. \n\n\n\nWe were joined by Professor Dame Jessica Corner\, new Executive Chair of Research England\, who offered opening keynote remarks\, and by two panels of UK and international experts. URL:https://researchonresearch.org/event/machine-learning-metrics-merit-the-future-of-research-assessment/ CATEGORIES:Seminar,Research Evaluation ATTACH;FMTTYPE=image/jpeg:https://researchonresearch.org/wp-content/uploads/2024/03/tide-ocean-waves-beach-scaled-e1737735101368.jpeg END:VEVENT BEGIN:VEVENT DTSTART;TZID=UTC:20220721T160000 DTEND;TZID=UTC:20220721T170000 DTSTAMP:20250708T064222 CREATED:20250128T110801Z LAST-MODIFIED:20250128T110801Z UID:2144-1658419200-1658422800@researchonresearch.org SUMMARY:When priorities don't align with needs: the case of mental health research DESCRIPTION:Mental ill-health and well-being are increasingly recognised as being intimately linked to a wide range of environmental and social factors. As such\, the ways in which researchers approach\, understand\, and engage with mental health must be broad\, ranging from the biophysiological mechanisms underpinning brain function\, to the societal determinants which alter it. \n\n\n\nThe significance of this connection has been illustrated by the effects of COVID lockdowns\, during which fear\, sudden changes in daily habits and family roles\, domestic violence\, work burnout and other stressors have all palpably impinged on mental well-being. 
\n\n\n\nIn this seminar\, Ismael Rafols\, senior researcher at the Centre for Science and Technology Studies (CWTS\, Leiden University) and associate faculty at SPRU (Science Policy Research Unit) at the University of Sussex\, presents a recent study\, based on a collaboration between Vinnova and CWTS. \n\n\n\nThe study contrasts current research priorities with societal demands through the analysis of publication specialisation of countries\, funders and organisations\, shown in open interactive visualisations. The results suggest a need to diversify mental health research towards more socially engaged approaches. \n\n\n\nSara Nässtrom of Vinnova\, the Swedish Innovation Agency\, who represents Vinnova in Sweden’s National Strategy for Mental Health\, offers her response. \n\n\n\nThis event was part of RoRI’s seminar series on the theme of Culture Shift\, where we aim to spotlight some of the most exciting thinkers\, practitioners and research system entrepreneurs who are at the forefront of analysing\, pioneering and propelling culture shifts across science and research. URL:https://researchonresearch.org/event/when-priorities-dont-align-with-needs-the-case-of-mental-health-research/ CATEGORIES:Seminar ATTACH;FMTTYPE=image/jpeg:https://researchonresearch.org/wp-content/uploads/2024/03/puzzle-wooden-colourful-shapes-scaled-e1737735055172.jpeg END:VEVENT BEGIN:VEVENT DTSTART;TZID=UTC:20220616T153000 DTEND;TZID=UTC:20220616T163000 DTSTAMP:20250708T064222 CREATED:20250128T110801Z LAST-MODIFIED:20250128T110801Z UID:2145-1655393400-1655397000@researchonresearch.org SUMMARY:The Quantified Scholar DESCRIPTION:Around the world\, the good\, the bad and the ugly in research cultures are the focus of unprecedented scrutiny and debate. Imperatives of equality\, diversity\, inclusion\, impact\, integrity and sustainability are forcing overdue change to institutions\, policies and practices. But there is still a long way to go. 
\n\n\n\nJuan Pablo Pardo-Guerra\, associate professor of sociology at the University of California\, San Diego\, and author of the book The Quantified Scholar\, explores how processes of research evaluation themselves shape disciplines\, promote conformity and limit diversity. \n\n\n\nProf. Sarah de Rijcke\, Co-Chair of RoRI and Scientific Director at the Centre for Science and Technology Studies (CWTS)\, Leiden University\, and Dr Molly Morgan Jones\, Director of Policy at The British Academy\, offer their responses. \n\n\n\nThis seminar was organised by RoRI and Sheffield Metascience Network (MetaNet) at the University of Sheffield. URL:https://researchonresearch.org/event/the-quantified-scholar/ CATEGORIES:Online,Seminar,Research Evaluation ATTACH;FMTTYPE=image/jpeg:https://researchonresearch.org/wp-content/uploads/2024/03/stack-of-books-on-a-chair-e1737735006476.jpg END:VEVENT END:VCALENDAR