In the current study using 3D MRI data in a larger sample (N = 5,388), multiple SNPs showed significant association with multiple facial phenotypes, even after an over-conservative Bonferroni correction.
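The Bonferroni correction mentioned here can be sketched in a few lines: with m tests, each raw p-value is compared against alpha / m rather than alpha. The function name and p-values below are made up for illustration; they are not from the study.

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return indices of tests that survive a Bonferroni correction.

    With m tests, each raw p-value must fall below alpha / m to be
    declared significant at the family-wise level alpha.
    """
    threshold = alpha / len(p_values)
    return [i for i, p in enumerate(p_values) if p < threshold]

# Five hypothetical tests: the corrected threshold is 0.05 / 5 = 0.01,
# so only the first and third p-values survive.
print(bonferroni_significant([0.001, 0.04, 0.0001, 0.2, 0.011]))  # [0, 2]
```

Note how 0.011 would pass an uncorrected 0.05 threshold but fails the corrected one, which is exactly why the correction is described as conservative.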
But before everything actually accelerates, the medical research industry will need three things: (1) a larger sample size; (2) a way of letting people share their information in a granular fashion; and (3) a place to put the terabytes of medical data that ResearchKit generates.
The project is detailed in the contract as a seven-step process, with Kogan's company, GSR, generating an initial seed sample (though it does not specify how large this is here) using "online panels"; analyzing this seed training data using its own "psychometric inventories" to try to determine personality categories; the next step is Kogan's personality quiz app being deployed on Facebook to gather the full dataset from respondents and also to scrape a subset of data from their Facebook friends (here it notes: "upon consent of the respondent, the GS Technology scrapes and retains the respondent's Facebook profile and a quantity of data on that respondent's Facebook friends"); step 4 involves the psychometric data from the seed sample, plus the Facebook profile data and friend data, all being run through proprietary modeling algorithms, which the contract specifies are based on using Facebook likes to predict personality scores, with the stated aim of predicting the "psychological, dispositional and/or attitudinal facets of each Facebook record"; this then generates a series of scores per Facebook profile; step 6 is to match these psychometrically scored profiles with voter record data held by SCL, with the goal of matching (and thus scoring) at least 2M voter records for targeting voters across the 11 states; the final step is for matched records to be returned to SCL, which would then be in a position to craft messages to voters based on their modeled psychometric scores.
The scenario model is pre-populated with data based on a large sample of U.S. public companies (more than 2,500 companies) over a seven-year period (2004–2011), as compiled by BoardEx.1 To access the pre-populated model calculations, click the Calculations/Historical Data and Attrition Data tabs in the Excel spreadsheet that you can download from this page.
However, compared to previous studies of regulation compliance that have utilised data sets ranging in duration from 63 hours [9] to 80 hours of programming [24], this was a relatively large sample that was purposively selected to cover both school holidays and term time.
It also features the type of strong driving hypothesis, large sample size, and consistent year-to-year results that we look for in our data-driven betting systems.
As with other neuropsychological testing tools, the value of the SAC in concussion assessment is maximized when individual baseline test data is available because, without such baselines, the athlete's postinjury performance on neuropsychological testing and other concussion assessment measures, such as the SAC, must be interpreted by comparison with a generalized "normal" based on a large population sample.
With the use of national sampling weights, data from these 16 cities were designed to be nationally representative of families with children born in large cities in the United States from 1998 through 2000.22
When the team looked for it in data from the Human Microbiome Project, a large-scale project to sequence the DNA of all the microbes that live in and on our bodies, they found that it was present in 73 per cent of all 466 faecal samples.
"By acquiring a large amount of data, our system can significantly reduce the error involved in analyzing the physiological status of a crop and the monitoring efficiency of crop growing conditions, without requiring repeated sampling," said Li.
Young scientists "might spend much of graduate school optimizing computer code for a large physics experiment, or extracting samples in a biology lab, or doing the statistical analyses on other people's data," Walsh and Lee write in their email.
This is the largest-ever study estimating above- and below-ground carbon loss from selective logging and ground-level forest fires in the tropics, based on data from 70,000 sampled trees and thousands of soil, litter and dead wood samples from 225 sites in the eastern Brazilian Amazon.
In a just-published paper, astronomers used a sample of 40,000 galaxies in the COSMOS field, a large and contiguous patch of sky with deep enough data to look at galaxies very far away, and with accurate distance measurements to individual galaxies.
When the researchers expanded their search to include all the data from the Human Microbiome Project, a large-scale project to sequence the DNA of all the microbes that live in and on our bodies, they found that the same virus was present in 73 per cent of all 466 human faecal samples.
"And the fact that we can get a large sample size for specific types of surgeries allows us to narrow down our data and be more precise in our understanding of individual patient populations and procedures in terms of risk."
Thakar and the team looked at the Nationwide Inpatient Sample (NIS), the largest publicly available all-payer inpatient care database in the United States, containing data on more than seven million hospital stays each year.
From this survey data, NASA's James Webb Space Telescope as well as large ground-based observatories will be able to further characterize the targets, making it possible for the first time to study the masses, sizes, densities, orbits, and atmospheres of a large cohort of small planets, including a sample of rocky worlds in the habitable zones of their host stars.
Unlike with other solid tumors, there has been limited progress in understanding the contribution of genetic risk factors to the development of uveal melanoma, researchers say, primarily due to the absence of comprehensive genetic data from patients, as the large sample cohorts for this rare cancer type have not been available for research.
Employee-level data was collected through a questionnaire distributed to all employees in workplaces with fewer than 25 employees, and to a random sample of 25 employees in larger workplaces with more than 25 employees.
The replication crisis refers to a growing concern in experimental psychology (and the larger scientific community) about the declining number of studies able to confirm previous work with experiments that achieve the same results using the same methods, as well as the increased risk of data manipulation in studies with small sample sizes.
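The link between small sample sizes and unreliable findings can be illustrated with a quick power calculation. This is a minimal normal-approximation sketch for a two-sided two-sample z-test, not any particular study's method; the group sizes and effect size are made up for illustration.

```python
from statistics import NormalDist

def approx_power(n_per_group, effect_size, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test.

    effect_size is Cohen's d (the standardized mean difference);
    uses the standard normal approximation throughout.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return z.cdf(noncentrality - z_crit)

# A medium effect (d = 0.5) with 20 subjects per group is badly
# underpowered, while 100 per group clears conventional 80% power.
print(round(approx_power(20, 0.5), 2))   # ~0.35
print(round(approx_power(100, 0.5), 2))  # ~0.94
```

With power around 35%, most true effects of that size go undetected, and the significant results that do emerge tend to overstate the effect — one mechanism behind failed replications.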
Zeggini believes that collaborations will become increasingly important in her field as researchers use ever-larger sample sizes and generate ever-larger volumes of genetic data.
"Because the data sample was so large, it forced us to use statistically powerful analysis that could, in turn, measure properties in an unambiguous manner."
Scientists estimate that the experiment will result in one of the largest scientific samples of data ever, at least a few hundred petabytes: more than all the written works in the history of the world, several times over.
The researchers from the University of South Carolina College of Pharmacy and School of Medicine discovered this new subtype by analyzing data from 255 cervical cancer samples in The Cancer Genome Atlas, a large-scale federally funded project launched in 2005 by the National Cancer Institute and National Human Genome Research Institute.
Researchers performed a retrospective analysis of the 2008–2013 data from the Nationwide Emergency Department (ED) Sample, the largest all-payer ED database in the United States.
In order to contextualise the australopithecine and early Homo stature estimates and range of variability obtained from the footprints within a broader picture (Figure 12), and to compare them with a larger sample, we extended our analysis to consistent data based on skeletal elements, namely femurs (see Materials and methods for details).
Recent trends in neuroimaging studies focus increasingly on scalability issues, related both to larger data samples and to CPU-intensive computational methods.
"We will be able to link all of our group's discoveries using blood samples and clinical data from the extensive NHLBI-sponsored Multi-Ethnic Study of Atherosclerosis (MESA) cohort, a large longitudinal cohort established in 2000 to study factors contributing to cardiovascular disease progression."
Using large-scale empirical and simulated data sets, we found that the sample sizes used in the HapMap project are sufficient to capture common variation, but that performance declines substantially for variants with minor allele frequencies of <5%.
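Minor allele frequency, the quantity driving the performance drop noted here, is straightforward to compute from genotype counts. This minimal sketch assumes a biallelic, polymorphic site; the genotypes are made up for illustration.

```python
from collections import Counter

def minor_allele_frequency(genotypes):
    """Compute minor allele frequency from diploid genotype strings.

    Each genotype is a 2-character string such as "AA", "AG" or "GG".
    Assumes the site is biallelic and polymorphic (two distinct alleles
    actually observed in the sample).
    """
    counts = Counter(allele for g in genotypes for allele in g)
    total = sum(counts.values())
    return min(counts.values()) / total

# Hypothetical SNP with alleles A and G in ten individuals:
# 5 G alleles out of 20 chromosomes -> MAF = 0.25.
sample = ["AA", "AG", "AA", "GG", "AG", "AA", "AA", "AG", "AA", "AA"]
print(minor_allele_frequency(sample))  # 0.25
```

A variant with MAF below 5% contributes few minor-allele carriers even in a sizeable sample, which is why tagging performance degrades for rare variants.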
Combining the existing ISO, Spitzer, VLT and Keck ice data results in a large sample of ice sources (~80) that span all stages of star formation and a large range of protostellar luminosities (<0.1–10^5 L⊙).
A substantial investment has been made in the generation of large public resources designed to enable the identification of tag SNP sets, but data establishing the adequacy of the sample sizes used are limited.
A challenge for the scientific community is to find consensus in the methodologies used to scale CH4 from sample locations to the larger domain and to integrate information obtained from these various data sets.
The CRS4's Bioinformatics laboratory has access to large clinical sample sets and genomic data and closely collaborates with hospitals to support clinical researchers in translating basic research findings into clinical applications.
Research on vaccine biomarkers, including in-depth comparative analysis of data from different platforms, large-scale RNA sequencing, harmonisation of standard operating procedures (SOPs) for sample, microarray and data analysis, as well as transcriptome mapping (more than 1400 samples analysed).
Data shared through RD-Connect is accessible beyond the usual institutional and national boundaries, and researchers across the world can benefit from the opportunity to work with others with an interest in the same field, pool data to create larger cohorts, find confirmatory cases, and access samples for further study.
Due to the availability of data from three observing runs separated by ~10- and 1-month timescales, we are able to demonstrate clear evidence for evolution of the photometric amplitudes, and hence spot patterns, over the 10-month gap, although we are not able to constrain the timescales for these effects in detail due to limitations imposed by the large gaps in our sampling, preventing use of the phase information.
Running a facility of this size requires a massive amount of support, and we work closely with the library preparation team, which supplies large numbers of DNA templates in a form ready to be sequenced; the Institute's IT team, which maintains the extensive compute and storage infrastructure necessary; sequencing informatics, which develops software tools to process, analyse, store and track all the data, projects and samples for the Illumina pipeline; and the development team, which invents novel and improved protocols to take better advantage of this new technology.
The qualitative data reported in this article were collected as part of a larger research project which surveyed a national random sample of users of a large online dating site (N = 349) about relational goals, honesty and self-disclosure, and perceived success in online dating.
As advocated by the 22-member panel chaired by former Gov. Lamar Alexander of Tennessee, both bills would expand the Congressionally mandated National Assessment of Educational Progress to provide state-by-state data, measure learning in more core subjects, include out-of-school 17-year-olds, and provide a larger sampling of private-school students.
In planning for the 1988 survey, which provides the only federal data on civil-rights compliance in education, O.C.R. has quietly inched back toward its old method, rescinding a change that allowed large districts to sample only certain schools and designing the sample to include more districts that have not been surveyed recently.
The Teacher Follow-up Survey of the Schools and Staffing Survey (SASS-TFS) provides data designed to examine teacher turnover, and it has a much larger sample: 706 former teachers currently working in nonteaching jobs.
Only a few thousand such students have been studied, so estimates are not from as large a sample as one would like, and the cities in these studies may differ from Milwaukee in some respects, but the data provide as direct an estimate as can be obtained.
As part of the larger project, I gathered data on the names and zip codes of school district employees in a stratified sample of 70 California school districts, all of them unionized, and I matched these names to county voter files to get each employee's voting history.
Our major findings were based on analyses of TAAS results for all the schools in Texas plus all the 1992–1998 data from NAEP's large and carefully constructed samples for Texas and the nation.
Activities to help learners of secondary mathematics to interpret frequency graphs, cumulative frequency graphs and box-and-whisker plots for large samples and to see how a large number of data points can result in the graph being approximated by a continuous distribution.
First, using data from the German Microcensus, we show that the same pattern holds in much larger and more recent samples and in estimation within occupational groups that excludes occupations where brawn is important.
After analyzing the content of samples from local wells for a project about the quality of drinking water, fifth- and sixth-grade students at Shutesbury Elementary School display their information on a large map of the town and look for patterns in the data.
Recent data from a study we are doing here at Wellesley Centers for Women with a large, racially diverse sample of low-income students in a large urban school district found that 95 percent of students, both boys and girls, aspired to attend college when asked in 9th and 10th grade.
The single-school configuration helped the teachers easily administer common assessments, obtain assessment data from a larger sample size (3 biology classrooms), and use the information from their collaborative study of the assessment results to modify instruction in a timely manner.
- Gathered the largest sampling of comparable, non-aggregated attendance data in the state, a fact not lost on California Attorney General Kamala Harris, who based her office's report on the state's learning crisis on A2A data;