Warsaw PhD School in Natural and BioMedical Sciences

Secretariat:

phdoffice@warsaw4phd.eu

Last update: 03.10.2022 (changelog: The Human Protein Atlas, Bridge of Knowledge, Seeing Theory)

A list of interesting scientific ideas, tools, and links to conferences and grants for young scientists (PhD students) across many disciplines. The list will be updated on an ongoing basis.

[!] Additional information: An expanded list with additional sources on neuroscience/neuroimaging/data science topics is available on this GitHub page (follow the repository to receive notifications of updates).

Travel grants (available for PhD students):

Research grants (available for PhD students):

Summer schools & programs (available for PhD students):

  • Neurohackademy: a summer school in neuroimaging & data science, held at the University of Washington eScience Institute (organized yearly)
  • Neuromatch: Neuromatch Academy (a 3-week program teaching computational techniques crucial in both academia and industry) & Neuromatch Conference (a conference for the computational neuroscience community); organized yearly
  • Google Summer of Code: a global, online program focused on bringing new contributors into open source software development (organized yearly)

On-line courses:

  • ReproNim Statistics Module: Statistical basis for neuroimaging analyses: the basics / Effect size and variation of effect sizes in brain imaging / P-values and their issues / Statistical power in neuroimaging and statistical reproducibility / The Positive Predictive Value / Cultural and psychological issues
  • DataCamp: programming courses
  • Seeing Theory: a simple introduction to statistics and probability through the use of interactive visualizations (Brown University)

Predatory journals/publishers:

  • Think-Check-Submit: this international, cross-sector initiative aims to educate researchers, promote integrity, and build trust in credible research and publications
  • Beall’s List (expanded 2022): a list of predatory journals & trusted resources

General:

  • BIDS: Brain Imaging Data Structure; a simple and intuitive way to organize and describe neuroimaging and behavioral data
  • Protocols.io: science methods, assays, clinical trials, operational procedures and checklists for keeping your protocols up to date, as recommended by Good Laboratory Practice (GLP) and Good Manufacturing Practice (GMP)
  • Sample Consent Forms for neuroimaging research (EN/DE)
  • ADDI + AD Workbench: Alzheimer’s Disease Data Initiative; provides a free data-sharing platform, data science tools, funding opportunities, and global collaborations to advance scientific breakthroughs and accelerate progress towards new treatments and cures for AD and related dementias
  • The Human Protein Atlas: a Swedish program initiated in 2003 with the aim of mapping all the human proteins
  • Bridge of Knowledge: a Polish open-access system collecting publications, studies, projects, and many other types of resources across a range of subject areas
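
As an illustration of the BIDS entry above, here is a minimal sketch of a BIDS-style directory for one subject, built with Python's pathlib. The dataset name and files are placeholders of my own; consult the BIDS specification for the full naming rules.

```python
# Minimal sketch of a BIDS-style layout for one subject.
# Dataset name and file names are illustrative, not a real study.
from pathlib import Path
import json, tempfile

root = Path(tempfile.mkdtemp()) / "my_bids_dataset"
root.mkdir(parents=True)

# Required top-level metadata file.
(root / "dataset_description.json").write_text(
    json.dumps({"Name": "Example dataset", "BIDSVersion": "1.8.0"})
)

# One subject with an anatomical scan and a resting-state fMRI run.
anat = root / "sub-01" / "anat"
func = root / "sub-01" / "func"
anat.mkdir(parents=True)
func.mkdir(parents=True)
(anat / "sub-01_T1w.nii.gz").touch()
(func / "sub-01_task-rest_bold.nii.gz").touch()

for p in sorted(root.rglob("*")):
    print(p.relative_to(root))
```

The point of the convention is that file names and folder levels alone (subject, modality, task) tell analysis tools what each file is, so BIDS-aware software can process the dataset without custom configuration.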

Programming:

  • Dask: parallel computing with Python
  • Docker: OS-level virtualization to deliver software in packages called containers
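
For the Dask entry above, a minimal sketch of its `delayed` interface (the `inc` function and the numbers are my own toy example, not from the linked resources): calls build a lazy task graph, and `.compute()` then executes independent tasks, in parallel where possible.

```python
# Toy Dask workflow: build a lazy task graph first, then execute it.
from dask import delayed

@delayed
def inc(x):
    return x + 1

# Nothing runs yet -- these calls only record tasks in a graph.
parts = [inc(i) for i in range(4)]   # lazily: 1, 2, 3, 4
total = delayed(sum)(parts)          # a task depending on all four

result = total.compute()             # execute the graph
print(result)
```

The same code scales from a laptop to a cluster by swapping the scheduler, which is why the lazy-graph pattern is worth learning even for small analyses.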

A list of platforms where you can find publicly available data for analysis and/or publish data from your project.

Platforms:

  • OpenNeuro: BIDS-compliant MRI, PET, MEG, EEG, and iEEG data
  • OpenNeuro PET: BIDS-compliant PET data
  • NeuroVault: unthresholded statistical maps, parcellations, and atlases produced by MRI and PET studies
  • GigaDB: contains 2201 discoverable, trackable, and citable datasets that have been assigned DOIs and are available for public download and use
  • Machine learning datasets: a list of machine learning datasets from across the web

Interesting articles on reliability and progress in science.

  • Why Most Published Research Findings Are False (PLOS/ John Ioannidis) | Paper / Wiki: an essay by John Ioannidis (Stanford School of Medicine); the author argues that a large number of papers in medical research contain results that in fact cannot be replicated and are false positives
  • How scientists fool themselves – and how they can stop (Nature/ Regina Nuzzo) | Article / PDF: cognitive fallacies in research and debiasing techniques
  • Power failure: why small sample size undermines the reliability of neuroscience (Nature/ Katherine S. Button et al.) | Paper: low statistical power and its influence on true/false effects
  • Slowed canonical progress in large fields of science (PNAS/ Johan S. G. Chu & James A. Evans) | Paper: “Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion.”
  • False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant (Joseph P. Simmons, Leif D. Nelson & Uri Simonsohn) | Paper: “First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates.”
  • Revised standards for statistical evidence (PNAS/ Valen E. Johnson) | Paper: “The lack of reproducibility of scientific research undermines public confidence in science and leads to the misuse of resources when researchers attempt to replicate and extend fallacious research findings. (…) Modifications of common standards of evidence are proposed to reduce the rate of nonreproducibility of scientific research by a factor of 5 or greater.”
  • The file drawer problem and tolerance for null results (Robert Rosenthal) | Paper: “For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the “file drawer problem” is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.”
  • Is there a large sample size problem? (Richard A. Armstrong) | Paper & The paradox of large samples (S. Kunte & A. P. Gore) | Paper: statistical issues with large sample sizes