bolha.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
We're a Brazilian IT community. We love IT/DevOps/Cloud, but we also love to talk about life, the universe, and more.

Server stats: 252 active users

#preprint


#Preprint sites #bioRxiv and #medRxiv launch new era of independence
The popular repositories, where life #scientists post research before #peerreview, will be managed by a new organization called #openRxiv.
Until now, they had been managed by Cold Spring Harbor Laboratory. The new organization will have a board of directors and a scientific and medical advisory board, and is supported by a fresh US$16-million grant from the Chan Zuckerberg Initiative (CZI).
nature.com/articles/d41586-025


🚨 New preprint 🚨

Hydrology and cave (and cave hydrology!) enthusiasts may enjoy this preprint, posted today for community review in the #EGU journal #HESS. Led by former #UNSW student Christina Song, with @Andbaker and myself, we looked at recharge thresholds (the amount of precipitation needed for recharge to occur in a cave) and how they changed after a fire.

egusphere.copernicus.org/prepr

The preprint is open now for community discussion, and will be accepting comments until 23 April.

egusphere.copernicus.org · Rainfall recharge thresholds decrease after an intense fire over a near-surface cave at Wombeyan, Australia
Abstract. Quantifying the amount of rainfall needed to generate groundwater recharge is important for the sustainable management of groundwater resources. Here, we quantify rainfall recharge thresholds using drip loggers situated in a near-surface cave: Wildman's cave at Wombeyan, southeast Australia. In just over two years of monitoring, 42 potential recharge events were identified in the cave, approximately 4 m below a land surface comprising a 30° slope with 37 % bare rock. Recharge events occurred within 48 hours of rainfall. Using daily precipitation data, the median 48 h rainfall needed to generate recharge was 19.8 mm, without clear seasonal variability. An intense experimental fire was conducted 18 months into the monitoring period: the median 48 h rainfall needed to generate recharge was 22.1 mm before the fire (n=22) and 16.4 mm after the fire (n=20), with the decrease in rainfall recharge most noticeable starting three months after the fire. Rainfall recharge thresholds and the number of potential recharge events at Wildman's Cave are consistent with those published from other caves in water-limited Australia. At Wildman's Cave, we infer that soil water storage, combined with the generation of overland flow over bare limestone surfaces, is the pathway for water movement to the subsurface via fractures, and that these determine the rainfall recharge threshold. Immediately after the fire, surface ash deposits initially retard overland flow; after ash removal from the land surface, soil loss and damage decrease the available soil water storage capacity, leading to more efficient infiltration and a decreased rainfall recharge threshold.
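The threshold estimate the abstract describes (the median 48 h antecedent rainfall across identified recharge events) can be sketched roughly as follows. The event days and rainfall figures are made-up illustration data, not the Wombeyan measurements:

```python
from statistics import median

def recharge_threshold_mm(daily_rain_mm, event_days):
    """Median 48 h antecedent rainfall over identified recharge events.

    daily_rain_mm: list of daily rainfall totals (mm)
    event_days: indices of days on which a drip-logger recharge event occurred
    """
    totals_48h = [
        daily_rain_mm[d] + (daily_rain_mm[d - 1] if d > 0 else 0.0)
        for d in event_days
    ]
    return median(totals_48h)

# Hypothetical example: two recharge events, on days 2 and 5
rain = [0.0, 10.0, 12.0, 0.0, 5.0, 20.0]
print(recharge_threshold_mm(rain, [2, 5]))  # median of 22.0 and 25.0 -> 23.5
```

In the study the event detection itself comes from the drip loggers; this sketch only covers the final aggregation step.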

New #preprint 📢 - Can #OpenAlex compete with #Scopus in bibliometric analysis?

👉 arxiv.org/abs/2502.18427

@OpenAlex has broader coverage and shows higher correlation with certain expert assessments.

At the same time, it has issues with metadata completeness and document classification.

❗ Most intriguingly: it turns out that raw #citation counts perform just as well as, and in some cases even better than, normalized indicators, which have long been considered the standard in #scientometrics.

arXiv.org · Is OpenAlex Suitable for Research Quality Evaluation and Which Citation Indicator is Best?
This article compares (1) citation analysis with OpenAlex and Scopus, testing their citation counts, document type/coverage and subject classifications and (2) three citation-based indicators: raw counts, (field and year) Normalised Citation Scores (NCS) and Normalised Log-transformed Citation Scores (NLCS). Methods (1&2): The indicators calculated from 28.6 million articles were compared through 8,704 correlations on two gold standards for 97,816 UK Research Excellence Framework (REF) 2021 articles. The primary gold standard is ChatGPT scores, and the secondary is the average REF2021 expert review score for the department submitting the article. Results: (1) OpenAlex provides better citation counts than Scopus and its inclusive document classification/scope does not seem to cause substantial field normalisation problems. The broadest OpenAlex classification scheme provides the best indicators. (2) Counterintuitively, raw citation counts are at least as good as nearly all field normalised indicators, and better for single years, and NCS is better than NLCS. (1&2) There are substantial field differences. Thus, (1) OpenAlex is suitable for citation analysis in most fields and (2) the major citation-based indicators seem to work counterintuitively compared to quality judgements. Field normalisation seems ineffective because more cited fields tend to produce higher quality work, affecting interdisciplinary research or within-field topic differences.
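For readers unfamiliar with the indicators being compared: a field-and-year Normalised Citation Score divides each paper's citation count by the mean count of its field-year group. A minimal sketch on toy data (not the paper's 28.6-million-article dataset):

```python
from collections import defaultdict
from statistics import mean

def normalised_citation_scores(papers):
    """papers: list of (field, year, citations) tuples.
    Returns NCS = citations / mean citations of the paper's field-year group."""
    groups = defaultdict(list)
    for field, year, cites in papers:
        groups[(field, year)].append(cites)
    group_mean = {key: mean(vals) for key, vals in groups.items()}
    return [cites / group_mean[(field, year)] for field, year, cites in papers]

toy = [("bio", 2020, 10), ("bio", 2020, 30), ("math", 2020, 2)]
print(normalised_citation_scores(toy))  # [0.5, 1.5, 1.0]
```

The preprint's counterintuitive finding is that ranking papers by the raw `citations` column correlates with quality judgements about as well as this normalised version.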

After collaborating with Jess Deighton, Neil Humphrey and many others since early 2017, the results of the "Education for Wellbeing" programme are out.

You can find a top-level summary here:
annafreud.org/research/current

You can find the DFE briefings and technical report here:
gov.uk/government/publications

And you can find #PrePrint papers on impact, implementation, qualitative and economic analyses here:
osf.io/kxug7/

#RCT #CACE #AnnaFreud

Haustein et al (2024) Estimating global article processing charges paid to six publishers for open access between 2019 and 2023

(SPOILER: the authors calculated a total of almost US$9 billion in their sample)

arxiv.org/abs/2407.16551

#preprint #openaccess
#academicpublishing

arXiv.org · Estimating global article processing charges paid to six publishers for open access between 2019 and 2023
This study presents estimates of the global expenditure on article processing charges (APCs) paid to six publishers for open access between 2019 and 2023. APCs are fees charged for publishing in some fully open access journals (gold) and in subscription journals to make individual articles open access (hybrid). There is currently no way to systematically track institutional, national or global expenses for open access publishing due to a lack of transparency in APC prices, what articles they are paid for, or who pays them. We therefore curated and used an open dataset of annual APC list prices from Elsevier, Frontiers, MDPI, PLOS, Springer Nature, and Wiley in combination with the number of open access articles from these publishers indexed by OpenAlex to estimate that, globally, a total of $8.349 billion ($8.968 billion in 2023 US dollars) was spent on APCs between 2019 and 2023. We estimate that in 2023 MDPI ($681.6 million), Elsevier ($582.8 million) and Springer Nature ($546.6 million) generated the most revenue from APCs. After adjusting for inflation, we also show that annual spending almost tripled from $910.3 million in 2019 to $2.538 billion in 2023, that hybrid fees exceed gold fees, and that the median APCs paid are higher than the median listed fees for both gold and hybrid. Our approach addresses major limitations in previous efforts to estimate APCs paid and offers much-needed insight into an otherwise opaque aspect of the business of scholarly publishing. We call upon publishers to be more transparent about OA fees.
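The estimation approach the abstract describes (publisher-level OA article counts multiplied by list prices, then inflation-adjusted) can be sketched like this. The counts, prices, and CPI factors below are placeholders for illustration, not the study's data:

```python
def estimate_apc_spend(oa_articles, list_prices, cpi_to_2023):
    """Nominal and inflation-adjusted APC spend per year.

    oa_articles[year][publisher]: open access article count (e.g. from OpenAlex)
    list_prices[year][publisher]: APC list price in USD for that year
    cpi_to_2023[year]: factor converting that year's USD to 2023 USD
    """
    nominal = {
        year: sum(counts[p] * list_prices[year][p] for p in counts)
        for year, counts in oa_articles.items()
    }
    real = {year: spend * cpi_to_2023[year] for year, spend in nominal.items()}
    return nominal, real

# Placeholder numbers only
nominal, real = estimate_apc_spend(
    {2019: {"PublisherA": 100, "PublisherB": 50}},
    {2019: {"PublisherA": 2000.0, "PublisherB": 3000.0}},
    {2019: 1.2},
)
print(nominal[2019])  # 350000.0 nominal; real applies the CPI factor
```

As the abstract notes, list prices overstate or understate what was actually paid (discounts, waivers, hybrid deals), which is why the authors frame the totals as estimates.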

New #preprint! My student Leya Lopez and I have developed a framework for analyzing pump-probe spectroscopy measurements when the pump-probe response depends nonlinearly on the incident intensity.

We solve for the optical coefficients for different types of response, and get solutions in terms of Bessel functions, hypergeometric functions, and Heun functions, for those of you who are into that kind of thing. I had never heard of Heun functions before this.

arxiv.org/abs/2410.21496

arXiv.org · Nonlinear photoconductivity in pump-probe spectroscopy. I. Optical coefficients
We analyze the optical pump-probe reflection and transmission coefficients when the photoinduced response depends nonlinearly on the incident pump intensity. Under these conditions, we expect the photoconductivity depth profile to change shape as a function of the incident fluence, unlike the case when the photoinduced response is linear in the incident intensity. We consider common optical nonlinearities, including photoconductivity saturation and two-photon absorption, and we derive analytic expressions for the photoconductivity depth profile when one or more is present. We review the theory of the electromagnetic transmission and reflection coefficients in a stratified medium, and we derive general expressions for these coefficients for a medium with an arbitrary photoconductivity depth profile. For several photoconductivity profiles of importance in pump-probe spectroscopy, we show that the wave equation can be transformed into one of three standard differential equations (the Bessel equation, the hypergeometric equation, and the Heun equation) with analytic solutions in terms of their associated special functions. From these solutions, we derive exact analytic expressions for the optical coefficients in terms of the photoconductivity at the optical interface, and we discuss their limiting forms in various physical limits. Our results provide a systematic guide for analyzing pump-probe measurements over a wide range of pump intensities, and establish a framework for constraining the systematic uncertainty associated with nonlinear photoconductivity profile distortion.
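As a concrete example of the intensity-dependent response the abstract mentions, two-photon absorption makes the pump's own propagation nonlinear. The following is the standard textbook relation, not taken from the preprint itself:

```latex
% Pump propagation with linear absorption (\alpha) and two-photon absorption (\beta):
\frac{dI}{dz} = -\alpha I - \beta I^{2},
% a Bernoulli equation with the closed-form solution
I(z) = \frac{\alpha I_{0}\, e^{-\alpha z}}{\alpha + \beta I_{0}\left(1 - e^{-\alpha z}\right)},
% which reduces to Beer-Lambert decay I(z) = I_0 e^{-\alpha z} as \beta \to 0.
```

Because the deposited energy, and hence the photoconductivity depth profile, inherits this non-exponential shape at high fluence, the probe's reflection and transmission coefficients change form, which is what the paper's framework accounts for.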

Our new #preprint changes the way we look at an “extinction vortex”, in which a small population loses fitness, causing it to become even smaller: biorxiv.org/content/10.1101/20.
#MutationalMeltdown #EffectivePopulationSize #EvolutionaryRescue #PopulationGenetics #EvolGenPaper @wmawass @jdmatheson @uliseshmc 1/7

bioRxiv · Extinction vortices are driven more by a shortage of beneficial mutations than by deleterious mutation accumulation
Natural populations are increasingly at risk of extinction due to climate change and habitat loss and fragmentation. The long-term viability of small populations can be threatened by an extinction vortex: a positive feedback loop between declining fitness and declining population size. Two distinct mechanisms can drive an extinction vortex: i) ineffective selection in small populations allows deleterious mutations to fix, driving mutational meltdown, and ii) fewer individuals produce fewer of the novel beneficial mutations essential for long-term adaptation, a mechanism we term mutational drought. We measure the relative importance of each mechanism on the basis of how sensitive the beneficial vs. deleterious components of fitness flux are to changes in census population size near the critical population size at which fitness is stable. We derive analytical results given linkage equilibrium, complemented by simulations that capture the complex linkage disequilibria that emerge under high deleterious mutation rates. Even in the absence of environmental change, mutational drought can be nearly as important as mutational meltdown. Real populations must also adapt to a changing environment, making mutational drought more important. A partial exception is that mutational drought is somewhat less important when the beneficial mutation rate is high, although its contribution remains substantial. The critical population size is driven substantially higher by linkage disequilibria between deleterious and beneficial mutations, which also increase (albeit modestly) the importance of mutational drought.

🚨 New #preprint online! 🚨

From friend to foe and back - Coevolutionary transitions in the mutualism-antagonism continuum

1st author Felix Jäger studied the dynamic nature of biotic interactions and identified an evolutionary #TippingPoint: a gradual change in environmental conditions may lead to an abrupt breakdown of #mutualism to #antagonism, which can't be reversed by restoring the initial conditions. 🤯

#TheoreticalEcology
#EcologicalModelling

doi.org/10.1101/2024.09.27.615

bioRxiv · From friend to foe and back - Coevolutionary transitions in the mutualism-antagonism continuum
Interspecific interactions evolve along a continuum ranging from mutualism to antagonism. Evolutionary theory has so far focused mostly on parts of this continuum, notably on mechanisms that enable and stabilise mutualism. These mechanisms often involve partner discrimination, ensuring that interaction intensity is higher with more cooperative partners. However, the gradual trajectory of coevolutionary transitions between mutualism and antagonism remains unclear. Here, we model how discrimination ability in one partner coevolves with cooperativeness in the other, and analyse the resulting evolutionary trajectories in the mutualism-antagonism continuum. We show that strong ecological change, such as a radical host shift or colonisation of a new environment, can trigger transitions in both directions, including back-and-forth transitions between antagonism and mutualism. Moreover, we find an evolutionary tipping point: a stable mutualism may break down to antagonism if the cost of either mutualistic service or discrimination ability gradually increases above a threshold, beyond which this transition cannot be reversed by reducing costs again. Our study provides a new perspective on the evolution of biotic interactions and hence on the dynamic structure of ecological networks. Competing Interest Statement: The authors have declared no competing interest.
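The irreversibility described above is the classic hysteresis of a fold (saddle-node) bifurcation. The toy dynamical system below is a generic illustration of that behavior, not the authors' coevolutionary model: for the same parameter value, the state you end up in depends on history.

```python
def relax(r, x0, dt=0.01, steps=4000):
    """Relax dx/dt = r + x - x**3 to a stable equilibrium, starting from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (r + x - x**3)
    return x

# At r = 0 the system is bistable: two attractors, chosen by history.
low = relax(0.0, -1.0)   # settles on the lower branch (~ -1)
high = relax(0.0, 1.0)   # settles on the upper branch (~ +1)

# Push r past the tipping point (~0.385): the lower branch vanishes and the
# state jumps up. Returning r to 0 afterwards does NOT restore the low state.
jumped = relax(0.0, relax(1.0, -1.0))
print(low, high, jumped)
```

In the paper's terms, `r` plays the role of a gradually changing cost, and the jump corresponds to the abrupt, cost-reduction-proof breakdown of mutualism.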

I have worked on a lot of papers, but this is my first preprint.

Analyzing large biobanks of genomic data requires a fresh look at our quality control metrics, especially as we move to NGS and more diverse populations.

You may be throwing out biologically important variants.

I would be happy to hear feedback.

#UKBiobank #HardyWeinbergEquilibrium
#statistics #statisticalgenetics #GWAS #genetics #preprint

medrxiv.org/content/10.1101/20

medRxiv · A reassessment of Hardy-Weinberg equilibrium filtering in large-sample genomic studies
Hardy-Weinberg Equilibrium (HWE) is a fundamental principle of population genetics. Adherence to HWE, using a p-value filter, is used as a quality control measure to remove potential genotyping errors prior to certain analyses. Larger sample sizes increase power to differentiate smaller effect sizes, but also affect methods of quality control. Here, we test the effects of current methods of HWE QC filtering on sample sizes up to 486,178 subjects for imputed and Whole Exome Sequencing (WES) genotypes using data from the UK Biobank, and propose potential alternative filtering methods. METHODS: Simulations were performed on imputed genotype data from chromosome 1. A WES GWAS (Genome-Wide Association Study) was performed using PLINK2. RESULTS: Our simulations on the imputed data from chromosome 1 show a progressive increase in the number of SNPs eliminated from analysis as sample sizes increase. With the HWE p-value filter held constant at p<1e-15, the number of SNPs removed increases from 1.66% at n=10,000 to 18.86% at n=486,178 in a multi-ancestry cohort, and from 0.002% at n=10,000 to 0.334% at n=300,000 in a European-ancestry cohort. Greater reductions are seen in the WES analysis, with an 11.91% reduction in analyzed SNPs in a European-ancestry cohort (n=362,192) and a 32.70% reduction in a multi-ancestry dataset (n=463,605). Using a sample-size-specific HWE p-value cutoff removes ~2.25% of SNPs in the all-ancestry cohort across all sample sizes, but does not currently scale beyond 300,000 samples. A hard cutoff of +/- 20% deviation from HWE produces the most consistent results and scales across all sample sizes, but requires additional user steps. CONCLUSION: Testing for deviation from HWE may still be an important quality control step in GWAS studies; however, we demonstrate here that an HWE p-value threshold that is acceptable for smaller sample sizes will be inappropriate for large-sample studies due to an unnecessarily high number of variants removed prior to analysis. Rather than excluding variants that fail HWE prior to analysis, it may be better to include all variants in the analysis and examine their deviation from HWE afterward. We believe that adjusting the cutoffs will be even more important for large whole-genome sequencing results and more diverse population studies. Competing Interest Statement: BB and AS are full-time employees of DNAnexus, Inc. PG and DW are full-time employees of Ariel Precision Medicine Inc. Funding Statement: Authors were compensated for time contributed to the study by their respective institutions, which also paid for any compute needed to complete experiments on the UK Biobank Research Analysis Platform.
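For context on what the p-value filter in this preprint does: the textbook HWE check compares observed genotype counts against the p², 2pq, q² expectation with a 1-d.f. chi-square test. A minimal stdlib-only sketch (illustrative, not the preprint's PLINK2 pipeline):

```python
from math import erfc, sqrt

def hwe_chi2_p(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit p-value against Hardy-Weinberg proportions."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)      # allele frequency of A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return erfc(sqrt(stat / 2))          # chi-square survival function, 1 d.f.

# The same proportional deviation from HWE yields a far smaller p-value at
# 10x the sample size -- the scaling effect driving the filtering problem.
print(hwe_chi2_p(800, 190, 10))         # modest deviation, n = 1,000
print(hwe_chi2_p(8000, 1900, 100))      # same deviation, n = 10,000
```

Because the chi-square statistic grows linearly with n for a fixed proportional deviation, a fixed p-value cutoff removes ever more variants as biobanks grow, which is exactly the preprint's argument for sample-size-aware thresholds.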

Hi #ScienceMastodon.

#preLights (prelights.biologists.com) is a #preprint highlights service run by the biological #community & supported by #Co_Biologists, allowing you to:

🔎 Sift through the high # of preprints.
💡 Highlight new thinking/techniques across the biological sciences.
📝 Comment on preprints.
👥 See comments & opinions of others.

We’ve moved over to our own instance @biologists.social, & hope that you’ll join us there for friendly chats between biologists!

biologists.social/about

preLights · Homepage
Welcome to preLights, the new preprint highlights service run by the biological community and supported by The Company of Biologists. Here, a team of scientists regularly review, highlight and comment on preprints they feel are of interest to the biological community.

Our Perspective #preprint on #DigitalContactTracing for #COVID, with diverse coauthors. An 80% success rate is needed at each of 6 points of failure to reduce R(t) by 26%. Singapore is the only country that got close this time, but the tech could be both transformational and safe next #pandemic – if we ignore prevailing self-serving narratives and instead heed the lessons-to-be-learned. arxiv.org/abs/2306.00873 1/4

arXiv.org · Digital contact tracing/notification for SARS-CoV-2: navigating six points of failure
Digital contact tracing/notification was initially hailed as a promising strategy to combat SARS-CoV-2, but in most jurisdictions it did not live up to its promise. To avert a given transmission event, both parties must have adopted the tech, it must detect the contact, the primary case must be promptly diagnosed, notifications must be triggered, and the secondary case must change their behavior to avoid the focal tertiary transmission event. If we approximate these as independent events, achieving a 26% reduction in R(t) would require 80% success rates at each of these six points of failure. Here we review the six failure rates experienced by a variety of digital contact tracing/notification schemes, including Singapore's TraceTogether, India's Aarogya Setu, and leading implementations of the Google Apple Exposure Notification system. This leads to a number of recommendations, e.g. that tracing/notification apps be multi-functional and integrated with testing, manual contact tracing, and the gathering of critical scientific data, and that the narrative be framed in terms of user autonomy rather than user privacy.
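The 26%/80% figures quoted above follow directly from the independence approximation: a transmission event is averted only if all six steps succeed, so the averted fraction is 0.8⁶. A quick check:

```python
success_rate = 0.8
n_steps = 6

# Probability that all six points of failure are passed, i.e. the fraction
# of transmission events averted (the reduction in R(t)).
reduction = success_rate ** n_steps
print(f"{reduction:.1%}")  # 26.2%
```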