The project is detailed in the contract as a seven-step process: 1) Kogan's company, GSR, generates an initial seed sample (the contract does not specify how large this is) using "online panels"; 2) GSR analyzes this seed training data using its own "psychometric inventories" to try to determine personality categories; 3) Kogan's personality quiz app is deployed on Facebook to gather the full dataset from respondents and also to scrape a subset of data from their Facebook friends (here it notes: "upon consent of the respondent, the GS Technology scrapes and retains the respondent's Facebook profile and a quantity of data on that respondent's Facebook friends"); 4) the psychometric data from the seed sample, plus the Facebook profile data and friend data, are all run through proprietary modeling algorithms, which the contract specifies are based on using Facebook likes to predict personality scores, with the stated aim of predicting the "psychological, dispositional and/or attitudinal facets of each Facebook record"; 5) this then generates a series of scores per Facebook profile; 6) these psychometrically scored profiles are matched with voter record data held by SCL, with the goal of matching (and thus scoring) at least 2M voter records for targeting voters across the 11 states; 7) the matched records are returned to SCL, which would then be in a position to craft messages to voters based on their modeled psychometric scores.
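The matching step described above (joining psychometrically scored profiles to voter records) can be sketched in a few lines. This is a hypothetical illustration only: the field names, the (name, zip) match key, and the records are all assumptions, not the actual SCL/GSR implementation.

```python
# Hypothetical sketch of matching scored profiles against voter records.
# Field names, the match key, and all records here are illustrative assumptions.

def match_records(scored_profiles, voter_records):
    """Match scored profiles to voter records on a (name, zip) key."""
    index = {(v["name"], v["zip"]): v for v in voter_records}
    matched = []
    for p in scored_profiles:
        key = (p["name"], p["zip"])
        if key in index:
            rec = dict(index[key])               # copy the voter record
            rec["openness_score"] = p["openness_score"]  # attach modeled trait score
            matched.append(rec)
    return matched

profiles = [{"name": "A. Smith", "zip": "30301", "openness_score": 0.72}]
voters = [{"name": "A. Smith", "zip": "30301", "state": "GA"}]
print(match_records(profiles, voters))
```

In practice such matching would need fuzzy name handling and deduplication; an exact-key join is only the simplest possible version of the idea.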
Furthermore, take a look at Jey Pandian's post, in which he details a 10-step process for using search data to build more specific and practical buyer personas.
"Quantifying the sulfur dioxide bull's-eyes is a two-step process that would not have been possible without two innovations in working with the satellite data," said co-author Nickolay Krotkov, an atmospheric scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland.
Any results that are reported to constitute a blinded, independent validation of a statistical model (or mathematical classifier or predictor) must be accompanied by a detailed explanation that includes: 1) specification of the exact "locked down" form of the model, including all data processing steps, the algorithm for calculating the model output, and any cutpoints that might be applied to the model output for final classification; 2) the date on which the model or predictor was fully locked down in exactly the form described; 3) the name of the individual(s) who maintained the blinded data and oversaw the evaluation (e.g., an honest broker); and 4) a statement of assurance that no modifications, additions, or exclusions were made to the validation data set from the point at which the model was locked down, and that neither the validation data nor any subset of it had ever been used to assess or refine the model being tested.
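One practical way to make requirement 1) and 2) auditable is to serialize the locked-down specification and record its hash alongside the lockdown date, so a later reviewer can verify nothing changed. This is a minimal sketch under assumed details: the model spec here is a trivial stand-in (a weighted sum with a cutpoint), not any particular real model.

```python
import hashlib
import json

# Hypothetical sketch: recording the "locked down" form of a model so a later
# audit can verify nothing changed after the lockdown date. The spec below is
# a trivial illustrative stand-in, not a real model.

def model_spec():
    # Exact locked-down specification: processing steps, output rule, cutpoint.
    return {
        "preprocessing": ["drop_missing", "z_score"],
        "output": "weighted_sum",
        "weights": [0.4, 0.6],
        "cutpoint": 0.5,
    }

def lockdown_record(spec, lock_date):
    # Canonical JSON (sorted keys) gives a deterministic hash of the spec.
    blob = json.dumps(spec, sort_keys=True).encode()
    return {"sha256": hashlib.sha256(blob).hexdigest(), "locked_on": lock_date}

record = lockdown_record(model_spec(), "2024-01-15")  # illustrative date
# Later, an honest broker recomputes the hash and compares:
check = lockdown_record(model_spec(), record["locked_on"])
assert check["sha256"] == record["sha256"]
print("model spec unchanged since", record["locked_on"])
```

The hash only proves the written specification is unchanged; requirements 3) and 4) (custody of the blinded data) still need an independent human broker.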
Some might see this move as a step backwards, albeit an essential one. Looked at in a positive light, though, it at least helps streamline the process, and if you were to go through the entire migration process without losing any data or noticing anything amiss, it should not matter too much to the end user in the long run, right?
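"Without losing any data" is checkable rather than a matter of hope: one common approach is to compare record counts and an order-independent checksum of the dataset before and after migration. A minimal sketch, with hypothetical records and field layout:

```python
import hashlib

# Hedged sketch of verifying a migration preserved all records:
# compare the record count and an order-independent checksum.
# The records and field names are hypothetical.

def dataset_fingerprint(records):
    count = len(records)
    digest = 0
    for r in records:
        h = hashlib.sha256(repr(sorted(r.items())).encode()).hexdigest()
        digest ^= int(h, 16)  # XOR makes the checksum independent of row order
    return count, digest

before = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
after = [{"id": 2, "name": "b"}, {"id": 1, "name": "a"}]  # same data, new order
assert dataset_fingerprint(before) == dataset_fingerprint(after)
print("migration preserved all records")
```

Because the XOR checksum ignores ordering, it will match even if the target system returns rows in a different order, while any dropped or altered record changes the result.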
I realize this is more of an engineering scenario, but we meticulously archive all of our code and data using a version control system, so that every step in the process can be revisited and evaluated at any time there is a question about how we arrived at our final results.
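The "every step can be revisited" idea amounts to provenance tracking: each pipeline step logs the code revision plus hashes of its input and output. A minimal sketch, where the revision string and the toy steps are illustrative assumptions:

```python
import hashlib
import json

# Hypothetical sketch of step-by-step provenance: each pipeline step logs the
# code revision and hashes of its input and output, so any question about how
# the final results were reached can be traced back later.

def h(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]

def run_step(name, func, data, revision, log):
    out = func(data)
    log.append({"step": name, "rev": revision, "in": h(data), "out": h(out)})
    return out

log = []
data = [3, 1, 2]
data = run_step("sort", sorted, data, "rev-abc123", log)   # revision id is illustrative
data = run_step("double", lambda xs: [2 * x for x in xs], data, "rev-abc123", log)
print(data)  # [2, 4, 6]
for entry in log:
    print(entry["step"], entry["in"], "->", entry["out"])
```

In a real setup the revision would come from the version control system itself and the log would be archived next to the data snapshots.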
processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract.
He has also undertaken training on lean methodologies, so he is adept at identifying waste in law firm and legal team processes, and at re-designing those processes to trim out unnecessary steps and reduce complexity in legal data systems.
Besides the payment term, the Consent Decree includes provisions requiring Brown & Brown to: take affirmative steps to avoid pregnancy discrimination in the future; create and adopt a pregnancy discrimination policy (to be submitted for approval to the EEOC); distribute copies to every employee and manager, and to every applicant; provide two hours of in-person training on gender discrimination, including pregnancy discrimination, to every manager involved in the hiring process; retain, at the company's cost, a "subject matter expert" (to be agreed upon by the EEOC) on sex discrimination to conduct those sessions; provide to non-managers one hour of video or webinar training on the same topic(s); make yearly reports to the EEOC for two years regarding further complaints of pregnancy discrimination, if any; post a Notice of the consent decree at the facility; and retain all documents and data related to compliance with the Consent Decree.