Instagram photos reveal predictive markers of depression

Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners’ average unassisted diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally-generated features. These results suggest new avenues for early screening and detection of mental illness.

The advent of social media presents a promising new opportunity for early detection and intervention in psychiatric disorders. Predictive screening methods have successfully analyzed online media to detect a number of harmful health conditions (1–11). All of these studies relied on text analysis, however, and none have yet harnessed the wealth of psychological data encoded in visual social media, such as photographs posted to Instagram. In this report, we introduce a methodology for analyzing photographic data from Instagram to predictively screen for depression.
There is good reason to prioritize research into Instagram analysis for health screening. Instagram members currently contribute almost 100 million new posts per day (12), and Instagram's rate of new users joining has recently outpaced Twitter, YouTube, LinkedIn, and even Facebook (13). A nascent literature on depression and Instagram use has so far yielded results that are either too general or too labor-intensive to be of practical significance for predictive analytics (14, 15). In our research, we incorporated an ensemble of computational methods from machine learning, image processing, and other data-scientific disciplines to extract useful psychological indicators from photographic data. Our goal was to successfully identify and predict markers of depression in Instagram users' posted photographs.
Hypothesis 1: Instagram posts made by individuals diagnosed with depression can be reliably distinguished from posts made by healthy individuals, using only measures extracted computationally from posted photos and associated metadata.

Photographic markers of depression
Photographs posted to Instagram offer a vast array of features that might be analyzed for psychological insight. The content of photographs can be coded for any number of characteristics: Are there people present? Is the setting in nature or indoors? Is it night or day? Image statistical properties can also be evaluated at a per-pixel level, including values for average color and brightness. Instagram metadata offers additional information: Did the photo receive any comments? How many "likes" did it get? Finally, platform activity measures, such as usage and posting frequency, may also yield clues as to an Instagram user's mental state. We incorporated only a narrow subset of possible features into our predictive models, motivated in part by prior research into the relationship between mood and visual preferences.
In studies associating mood, color, and mental health, healthy individuals identified darker, grayer colors with negative mood, and generally preferred brighter, more vivid colors (16–19). By contrast, depressed individuals were found to prefer darker, grayer colors (17). In addition, Barrick, Taylor, and Correa (19) found a positive correlation between self-identification with depression and a tendency to perceive one's surroundings as gray or lacking in color. These findings motivated us to include measures of hue, saturation, and brightness in our analysis. We also tracked the use of Instagram filters, which allow users to modify the color and tint of a photograph.
Depression is strongly associated with reduced social activity (20, 21). As Instagram is used to share personal experiences, it is reasonable to infer that posted photos with people in them may capture aspects of a user's social life. On this premise, we used a face detection algorithm to analyze Instagram posts for the presence and number of human faces in each photograph. We also counted the number of comments and likes each post received as measures of community engagement, and used posting frequency as a metric of user engagement.

Early screening applications
Hypothesis 1 is a necessary first step, as it addresses an unanswered basic question: Is depression detectable in Instagram posts? On finding support for Hypothesis 1, a natural question arises: Is depression detectable in Instagram posts made before the date of first diagnosis?

Hypothesis 2: Instagram posts made by depressed individuals prior to the date of first clinical diagnosis can be reliably distinguished from posts made by healthy individuals.

After receiving a depression diagnosis, individuals may come to identify with their diagnosis (22, 23), and their self-portrayal on social media may then be influenced by this identification. It is possible that a successful predictive model, trained on the entirety of depressed Instagram users' posting histories, might not actually detect depressive signals per se, but rather purposeful content choices intended to convey a depressive condition. Training a model using only posts made prior to the date of first diagnosis addresses this potential confound. If support is found for Hypothesis 2, this would not only demonstrate a methodological advance for researchers, but also serve as a proof of concept for future healthcare applications. As such, we benchmarked the accuracy of our models against general practitioners' ability to correctly diagnose depression, as reported in a meta-analysis by Mitchell, Vaze, and Rao (24). The authors analyzed 118 studies that evaluated general practitioners' abilities to correctly diagnose depression in their patients, without assistance from scales, questionnaires, or other measurement instruments. Of the 50,371 patient outcomes pooled across studies, 21.9% were actually depressed, as evaluated separately by psychiatrists or by validated interview-based measures conducted by researchers. General practitioners were able to correctly rule out depression in non-depressed patients 81% of the time, but correctly diagnosed depressed patients only 42% of the time. We refer to these meta-analysis findings (24) as a comparison point for evaluating the usefulness of our models.
A major strength of our proposed models is that their features are generated by entirely computational means (pixel analysis, face detection, and metadata parsing), which can be applied at scale without additional human input. It seems natural to wonder whether these machine-extracted features pick up on signals similar to those humans use to identify mood and psychological condition, or whether they attend to wholly different information. A computer may be able to analyze the average saturation value of a million pixels, but can it pick out a happy selfie from a sad one? Are machine learning and human opinion sensitive to the same indicators of depression? Insight into these questions may also help frame our results within the larger discussion of human versus machine learning, which occupies a central role in the contemporary academic landscape.
To address these questions, we solicited human assessments of the Instagram photographs we collected. We asked new participants to evaluate photos on four simple metrics: happiness, sadness, interestingness, and likability. These ratings categories were intended to capture human impressions that were both intuitive and quantifiable, and which had some relationship to established depression indicators. DSM-IV (20) criteria for Major Depressive Disorder include feeling sad as a primary criterion, so sadness and happiness were obvious candidates as ratings categories. Epstein et al. (25) found that depressed individuals "had difficulty reconciling a self-image as an 'outgoing likeable person'", which prompted likability as an informative metric. We hypothesized that human raters would find photographs posted by depressed individuals to be sadder, less happy, and less likable, on average. Finally, we considered interestingness as a novel factor, without a clear directional hypothesis.

Hypothesis 3a: Human ratings of Instagram posts on common semantic categories can distinguish between posts made by depressed and healthy individuals.

Hypothesis 3b: Human ratings are positively correlated with computationally-extracted features.
If human and machine predictors show positive correlation, we can infer that each set of features reveals similar indicators of depression. In that case, the strength of the human model simply indicates whether it performs better or worse than the machine model. On the other hand, if machine and human features show little or no correlation, then regardless of human model performance, we would know that the machine features are capable of screening for depression while using different information signals than those captured by the affective ratings categories.

Method

Data collection
Data collection was crowdsourced using Amazon's Mechanical Turk (MTurk) platform. Separate surveys were created for depressed and healthy individuals. In the depressed survey, participants were invited to complete a questionnaire that involved passing a series of inclusion criteria, responding to a standardized clinical depression survey, answering questions related to demographics and history of depression, and sharing social media history. We used the CES-D (Center for Epidemiologic Studies Depression Scale) questionnaire to screen participant depression levels (26). CES-D assessment quality has been demonstrated to be on par with other depression inventories, including the Beck Depression Inventory and the Kellner Symptom Questionnaire (27, 28). Healthy participants were screened to ensure they were active Instagram users with no history of depression.
Qualified participants were asked to share their Instagram usernames and history. An app embedded in the survey allowed participants to securely log into their Instagram accounts and agree to share their data. Upon securing consent, we made a one-time collection of participants' entire Instagram posting history. In total, we collected 43,950 photographs from 166 Instagram users.
We asked a different set of MTurk crowdworkers to rate the Instagram photographs collected. This new task asked participants to rate a random selection of 20 photos from the data we collected. Raters judged how interesting, likable, happy, and sad each photo seemed, on a continuous 0–5 scale. Each photo was rated by at least three different raters, and ratings were averaged across raters. Raters were not informed that the photos came from Instagram, nor were they given any information about the study participants who provided the photos, including mental health status. Each ratings category showed good inter-rater agreement.
Only a subset of participant Instagram photos was rated (N=13,184). We limited ratings data to a subset because the task was time-consuming for crowdworkers, and so proved a costly form of data collection. For the depressed sample, ratings were made only for photos posted within a year, in either direction, of the date of first depression diagnosis. Within this subset, the 100 posts nearest to and prior to the diagnosis date were rated for each user. For the healthy sample, the 100 photos most recent to each user's date of participation in this study were rated.

Participant safety and privacy
Data privacy was a concern for this study. Strict anonymity was nearly impossible to guarantee to participants, given that usernames and personal photographs posted to Instagram often contain identifiable features. We made sure participants were informed of the risk of being personally identified, and assured them that no data with personal identifiers, including usernames, would be made public or published in any format.

Improving data quality
We employed several quality assurance measures in our data collection process to reduce noisy and unreliable data. Our surveys were visible only to MTurk crowdworkers who had completed at least 100 previous tasks with a minimum 95% approval rating; MTurk workers with this level of experience and approval have been found to provide reliable, valid survey responses (29). We also restricted access to American IP addresses, as MTurk data collected from outside the United States are generally of poorer quality (30). Each participant was permitted to take the survey only once.
We excluded participants who had successfully completed our survey but had a lifetime total of fewer than five Instagram posts. We also excluded participants with CES-D scores of 21 or lower, as studies have indicated that a CES-D score of 22 represents an optimal cutoff for identifying clinically relevant depression across a range of age groups and circumstances (31, 32).
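A minimal sketch of these exclusion rules applied to the depressed sample; the file and column names are hypothetical.

```python
# Sketch of participant exclusion (hypothetical file and column names).
import pandas as pd

participants = pd.read_csv("depressed_survey.csv")
eligible = participants[(participants["total_posts"] >= 5) &   # at least five lifetime posts
                        (participants["cesd_score"] >= 22)]    # CES-D clinical-relevance cutoff
```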

Feature extraction
Several different types of information were extracted from the collected Instagram data. We used total posts per user, per day, as a measure of user activity. We gauged community reaction by counting the number of comments and "likes" each posted photograph received. Face detection software was used to determine whether a photograph contained a human face, and to count the total number of faces in each photo, as a proxy for participants' social activity levels. Pixel-level averages were computed for hue, saturation, and value (HSV), three color properties commonly used in image analysis. Hue describes an image's coloring on the light spectrum (ranging from red to blue/purple); lower hue values indicate more red, and higher hue values indicate more blue. Saturation refers to the vividness of an image; low saturation makes an image appear gray and faded. Value refers to image brightness; lower values indicate a darker image. See Fig. 1 for a comparison of high and low HSV values. We also checked metadata to assess whether an Instagram-provided filter was applied to alter the appearance of a photograph. Collectively, these measures served as the feature set in our primary model. For the separate model fit on ratings data, we used only the four ratings categories (happy, sad, likable, interesting) as predictors.
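As a concrete illustration of this pixel-level color measurement, the following is a minimal sketch of per-photo HSV averaging. The paper does not name its image-processing library, so the use of OpenCV here, and the helper function itself, are assumptions.

```python
# Minimal sketch of per-photo HSV averaging (library choice is an assumption).
import cv2

def mean_hsv(path):
    """Return the mean hue, saturation, and value over all pixels of one photo."""
    bgr = cv2.imread(path)                      # OpenCV loads images in BGR order
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # convert to the HSV color space
    # Note: OpenCV encodes hue on a 0-179 scale; saturation and value on 0-255.
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return float(h.mean()), float(s.mean()), float(v.mean())
```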

Units of observation
In determining the best time span for this analysis, we encountered a difficult question: When, and for how long, does depression occur? A diagnosis of depression does not indicate the persistence of a depressive state for every moment of every day, so conducting the analysis with an individual's entire posting history as a single unit of observation would be rather specious. At the other extreme, taking each individual photograph as a unit of observation risks being too granular. De Choudhury et al. (5) looked at all of a given user's posts in a single day, and aggregated those data into per-person, per-day units of observation. We adopted this precedent of "user-days" as our unit of analysis.
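A minimal sketch of this user-day aggregation with pandas; the input file, column names, and the particular per-day statistics are hypothetical stand-ins for the features described above.

```python
# Sketch of aggregating per-post features into "user-day" observations.
import pandas as pd

posts = pd.read_csv("instagram_posts.csv", parse_dates=["posted_at"])  # hypothetical file
posts["day"] = posts["posted_at"].dt.date

user_days = (
    posts.groupby(["user_id", "day"])
         .agg(n_posts=("post_id", "count"),          # posting frequency
              mean_hue=("hue", "mean"),
              mean_saturation=("saturation", "mean"),
              mean_brightness=("brightness", "mean"),
              total_likes=("likes", "sum"),          # community engagement
              total_comments=("comments", "sum"),
              face_rate=("has_face", "mean"))        # share of posts with faces
         .reset_index()
)
```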

Statistical framework
We used Bayesian logistic regression with uninformative priors to determine the strength of individual predictors. Two separate models were trained. The All-data model used all collected data to address Hypothesis 1. The Pre-diagnosis model used all data collected from healthy participants, but only pre-diagnosis data from depressed participants, to address Hypothesis 2. We also fit an "intercept-only" model, in which all predictors are zero-weighted, to simulate a model under the null hypothesis. Bayes factors were used to assess model fit. Details on Bayesian estimation, model optimization, and diagnostic checks are available in SI Appendices IV, V, and VII. Frequentist regression output is provided for comparison in Appendix VI.
We also employed a suite of supervised machine learning algorithms to estimate the predictive capacity of our models. We report prediction results only from the best-performing algorithm, a 1,200-tree Random Forests classifier. As an informal benchmark for comparison, we present general practitioners' unassisted diagnostic accuracy as reported in Mitchell, Vaze, and Rao (MVR) (24).

Results
Both the All-data and Pre-diagnosis models were decisively superior to a null model. All-data predictors were significant with 99% probability. Pre-diagnosis and All-data confidence levels were largely identical, with two exceptions: Pre-diagnosis brightness decreased to 90% confidence, and Pre-diagnosis posting frequency dropped to 30% confidence, suggesting a null predictive value in the latter case.
Increased hue, along with decreased brightness and saturation, predicted depression. This means that photos posted by depressed individuals tended to be bluer, darker, and grayer (see Fig. 2). The more comments an Instagram post received, the more likely it was to have been posted by a depressed participant; the opposite was true for likes received. In the All-data model, higher posting frequency was also associated with depression. Depressed participants were more likely to post photos with faces, but had a lower average face count per photograph than healthy participants. Finally, depressed participants were less likely to apply Instagram filters to their posted photos. Our best All-data machine learning classifier, averaged over five randomized iterations, improved on MVR general practitioner accuracy on most metrics. Compared with the MVR results, the All-data model was less conservative (lower specificity) but better able to positively identify observations from depressed individuals (higher recall). Given 100 observations, our model correctly identified 70% of all depressed cases (n=37), with relatively few false alarms (n=23) and misses (n=17).
Pre-diagnosis predictions showed improvement over the MVR benchmark on precision and specificity. The Pre-diagnosis model found only about a third of the actual depressed observations, but when it did assign a depressed label, it was correct most of the time. By comparison, although MVR general practitioners discovered more true cases of depression, they were more likely than not to misdiagnose healthy subjects as depressed. Of the four predictors used in the human ratings model (happiness, sadness, likability, interestingness), only the sadness and happiness ratings were significant predictors of depression. Depressed participants' photos were, on average, sadder and less happy than those of healthy participants. Ratings assessments generally showed strong patterns of correlation with one another, but exhibited extremely low correlation with computational features. The modest positive correlation of human-rated happiness with the presence and number of faces in a photograph was the only exception to this trend. Correlation matrices for all models are available in Appendix IX.

Discussion
The present study employed machine learning techniques to screen for depression using photographs posted to Instagram. Our results supported Hypothesis 1, that markers of depression are observable in Instagram user behavior, and Hypothesis 2, that these depressive signals are detectable in posts made even before the date of first diagnosis. Human ratings proved capable of distinguishing between Instagram posts made by depressed and healthy individuals (Hypothesis 3a), but showed little or no correlation with most computational features (Hypothesis 3b). Our findings establish that visual social media data are amenable to analysis of affect using scalable, computational methods. One avenue for future research might integrate textual analysis of Instagram posts' comments, captions, and tags. Considering the early success of textual analysis in detecting psychological signals on social media (5, 33), modeling textual and visual features together could prove superior to either medium on its own.
Our model showed considerable improvement over unassisted general practitioners' ability to correctly diagnose depression. On average, more than half of general practitioners' depression diagnoses were false positives (24). By comparison, the majority of both All-data and Pre-diagnosis depression classifications were correct. As false diagnoses are costly for both healthcare programs and individuals, this improvement is noteworthy. Health care providers may be able to improve quality of care, and better identify individuals in need of treatment, using the simple, low-cost methods outlined in this report. Given that mental health services are unavailable or underfunded in many countries (34), this computational approach, requiring only patients' digital consent to share their social media histories, may open avenues to care which are currently difficult or impossible to provide.
Although our Pre-diagnosis prediction engine was rather conservative, and tended to classify most observations as healthy, its accuracy likely represents a lower bound on performance. Ideally, we would have used the All-data classifier to evaluate the Pre-diagnosis data, as the All-data model was trained on a much larger dataset. But because the Pre-diagnosis data constituted a subset of the full dataset, applying the All-data model to Pre-diagnosis observations would have artificially inflated accuracy, due to information leakage between training and test data. Instead, we trained a separate classifier using training and test partitions contained within the Pre-diagnosis data. This left the Pre-diagnosis model with considerably fewer data points to train on, resulting in weaker predictive power. As a result, Pre-diagnosis accuracy scores likely understate the method's true capacity.
Regarding the strength of specific predictive features, some results matched common perceptions of the effects of depression on behavior. Photos posted to Instagram by depressed individuals were more likely to be bluer, grayer, and darker, and to receive fewer likes. Depressed Instagram users in our sample had an outsized preference for filtering out all color from posted photos, and showed an aversion to artificially lightening photos, compared to healthy users. These results are congruent with the literature linking depression to a preference for darker, bluer, and monochromatic colors (16–19).
Other, seemingly intuitive, relationships failed to emerge. For example, the sadness of a photo, and the extent to which it is bluer, darker, and grayer than other photos, seem like close semantic matches. Despite both being strong predictors of depression, however, sadness ratings and color features were statistically unrelated.
Algorithmic face detection yielded intriguing results. Depressed users were more likely to post photos with faces, but they tended to post fewer faces per photo. Fewer faces may be an oblique indicator that depressed users interact in smaller social settings, which would accord with research linking depression to reduced social interactivity (5, 20, 21). That depressed Instagram users posted more photos with faces overall, however, is less readily interpretable. Depressed individuals have been shown to use more self-focused language (35), and it may be that this self-focus extends to photographs as well. If so, the abundance of low-face-count photos posted by depressed users may, in fact, be self-portraits. This "sad selfie" hypothesis remains untested.
A limitation of these findings concerns the nonspecific use of the term "depression" in the data collection process. We acknowledge that depression describes a general clinical state, and is frequently comorbid with other conditions. It is possible that a specific diagnostic class is responsible for driving the observed results, and subsequent research should adjust questionnaires to acquire specific diagnostic information. Additionally, it is possible that our results are specific to individuals who received clinical diagnoses. Current perspectives on depression treatment indicate that people who are "well-informed and psychologically minded, experience typical symptoms of depression and little stigma, and have confidence in the effectiveness of treatment, few concerns about side effects, adequate social support, and high self-efficacy" seek out mental health services (25). The intersection of these qualities with typical Instagram user demographics suggests caution in making broad inferences about depression based on our findings.
As the methods employed in this research provide a tool for inferring personal information about individuals, two points of caution should be considered. First, data privacy and ethical research practices are of particular concern, given recent admissions that individuals' social media data were experimentally manipulated or exposed without permission (36, 37). It is perhaps reflective of a current general skepticism towards social media research that, of the 509 individuals who began our survey, 221 (43%) refused to share their Instagram data, even after we provided numerous privacy guarantees. Future research should prioritize establishing confidence among experimental participants that their data will remain secure and private. Second, data trends often change over time, causing sociotechnical models of this sort to degrade without frequent recalibration (38). The findings reported here should not be taken as enduring facts, but rather as a methodological foundation upon which to build and refine subsequent models.
Paired with a commensurate focus on upholding data privacy and ethical analytics, the present work may serve as a blueprint for effective mental health screening in an increasingly digitalized society.More generally, these findings support the notion that major changes in individual psychology are transmitted in social media use, and can be identified via computational methods.

II. Face Detection
We used an elementary face detection script, based on an open-source demonstration (https://gist.github.com/dannguyen/cfa2fb49b28c82a1068f). The main adjustment we made to the open-source demo was to run through the detection loop twice, using two differing scale factors, as a single scale factor had difficulty finding both small and large faces. The mean difference in counted faces (detected faces minus actual faces) indicated that the algorithm slightly undercounted the number of faces in photos, for both depressed and healthy participants (μ = −0.015); in both groups, the algorithm undercounted by less than a single face, on average.
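The following is a sketch of the two-pass detection loop, using the Haar-cascade parameters reported later in this appendix (scale factors 1.05 and 1.4, min_neighbors = 4, min_size = 20px). The choice of cascade file and the rule for combining the two passes (keeping the larger count) are assumptions, as the text does not specify them.

```python
# Sketch of two-pass Haar-cascade face counting (cascade file and the
# pass-combination rule are assumptions).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path):
    """Run detection twice with differing scale factors; return the larger count."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    counts = []
    for scale in (1.05, 1.4):  # a small factor catches small faces; a large one, large faces
        faces = cascade.detectMultiScale(
            gray, scaleFactor=scale, minNeighbors=4, minSize=(20, 20))
        counts.append(len(faces))
    return max(counts)
```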

III. Summary statistics
All data collection took place between February 1, 2016 and April 6, 2016. Across both depressed and healthy groups, we collected data from 166 Instagram users and analyzed 43,950 posted photographs. The mean number of posts per user was 264.76 (SD=396.06). This distribution was skewed by a small number of highly frequent posters, as evidenced by a median of just 122.5 posts per user.
Among depressed participants, 84 individuals successfully completed participation and provided access to their Instagram data. Imposing the CES-D cutoff reduced the number of viable participants to 71. The mean age of viable participants was 28.8 years (SD=7.09), with a range of 19 to 55 years. Dates of participants' first depression diagnoses ranged from February 2010 to January 2016, with nearly all diagnosis dates (90.1%) falling in the period 2013–2015.
Among healthy participants, 95 individuals completed participation and provided access to their Instagram data. The mean age for this group was 30.7 years, with a range of 19 to 53 years; 65.3% of respondents were female. (Gender data were not collected for the depressed sample.) The All-data model used participants' entire Instagram posting histories, comprising 43,950 Instagram posts (24,811 depressed) across 166 individuals (71 depressed). Aggregation by user-days compressed these into 24,713 observations (13,230 depressed). Observations from depressed participants accounted for 53.4% of the entire dataset.
The Pre-diagnosis model used only Instagram posts made by depressed participants prior to the date of first depression diagnosis, along with the same full dataset from healthy participants as in the All-data model. These data consisted of 32,311 posts in total (13,192 depressed), yielding 18,513 aggregated user-day observations (7,030 depressed). Observations from depressed participants accounted for 38% of this dataset.

Bayesian logistic regression
A Bayesian framework avoids many of the inferential challenges of frequentist null hypothesis significance testing, including reliance on p-values and confidence intervals, both of which are subject to frequent misuse and misunderstanding (39–42). For comparison, results from frequentist logistic regression are included below; the two methods are largely in agreement.
Logistic regression was conducted using the MCMClogit function from the R package MCMCpack (43). This function asserts a model of the form

$$y_i \sim \mathrm{Bernoulli}(\pi_i)$$

with the inverse link function

$$\pi_i = \frac{1}{1 + \exp(-x_i'\beta)}$$

and a multivariate Normal prior on $\beta$:

$$\beta \sim \mathcal{N}(b_0, B_0^{-1})$$

We selected "uninformative" priors for all parameters in $\beta$, with $b_0 = 0$ and precision $B_0 = 0.0001$. While it is generally preferable to specify informative Bayesian priors, in this setting our parameters of interest were entirely novel, and so were not informed by prior literature or previous testing.
The MCMClogit() function employs a Metropolis algorithm to perform Markov Chain Monte Carlo (MCMC) simulations. The Instagram model simulation used two MCMC chains of 100,000 iterations with a burn-in of 10,000 and no thinning. The use of thinning for achieving higher-precision estimates from posterior samples is questionable when compared to simply running longer chains (44). While no best practice has been established for how long an unthinned chain should be, Christensen et al. (45) advised: "Unless there is severe autocorrelation, e.g., high correlation with, say [lag]=30, we don't believe that thinning is worthwhile". In our MCMC chains, we observed low autocorrelation at a lag of 30, and so felt confident in forgoing thinning. For comparison, we also ran a 100,000-iteration chain, thinned to every 10th iteration, with a burn-in of 5,000. While autocorrelation was noticeably reduced at shorter lags, this chain yielded near-identical parameter estimates from the posterior.
Recall that Bayesian regression coefficients are not assigned p-values or other significance measures conventional in frequentist null-hypothesis significance testing (NHST). Instead, we provide Highest Posterior Density Intervals (HPDIs), reporting the highest probability level at which the interval excludes zero as a possible coefficient value. For example, a reported 99% HPDI means that, based on samples from the simulated joint posterior distribution, the coefficient in question has a 99% probability of being non-zero. References to variable "significance" in the Results section relate only to the probability that a variable's parameter estimate is non-zero, e.g., "Variable X was significant with 99% probability". Bayes factors were used to assess model fit. Markov Chain Monte Carlo (MCMC) chains showed good convergence across all estimated parameters in every fitted model. In all models, Gelman-Rubin diagnostics (47) indicated simulation chain convergence, with point estimates of 1.0 for each parameter. Geweke diagnostics (48) also indicated post-burn-in convergence. Autocorrelation remained within acceptable levels. Trace, density, and autocorrelation plots for all models are presented in SI Appendix V.
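A generic way to compute an HPD interval from posterior draws is to slide a fixed-probability-mass window across the sorted samples and keep the narrowest one. This is a standard computation, sketched below for illustration, not the authors' exact routine (MCMCpack supplies its own).

```python
# Sketch of a highest posterior density interval from MCMC draws.
import numpy as np

def hpdi(samples, prob=0.99):
    """Narrowest interval containing `prob` of the sampled posterior mass."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    window = int(np.ceil(prob * n))           # number of draws the interval must cover
    widths = x[window - 1:] - x[: n - window + 1]
    lo = int(np.argmin(widths))               # narrowest window is the HPD interval
    return x[lo], x[lo + window - 1]
```

In the paper's usage, a coefficient is "significant with 99% probability" when its 99% HPD interval excludes zero.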

Machine learning models
We employed a suite of supervised machine learning algorithms to estimate the predictive capacity of our models. In a supervised learning paradigm, parameter weights are determined by training on a labeled subset of the available data ("labeled" meaning that the response classes are exposed). Fitted models are then used to predict class membership for each observation in the remaining unlabeled "holdout" data. All of our machine learning classifiers were trained on a randomly selected 70% of observations and tested on the remaining 30%. We employed stratified five-fold cross-validation to optimize hyperparameters, and averaged final model output metrics over five separate randomized runs.
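A minimal sketch of this protocol with scikit-learn; the tuning grid and the synthetic stand-in data are assumptions, since the paper does not report which hyperparameters were searched.

```python
# Sketch of the training protocol: 70/30 split, stratified five-fold CV for
# hyperparameter tuning, metrics averaged over five randomized runs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)  # stand-in for user-day data

scores = []
for run in range(5):                                    # five randomized runs
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=run)
    search = GridSearchCV(
        RandomForestClassifier(n_estimators=1200, random_state=run),
        param_grid={"max_features": ["sqrt", "log2"]},  # assumed tuning grid
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=run))
    search.fit(X_tr, y_tr)
    scores.append(search.score(X_te, y_te))

print(f"mean holdout accuracy over runs: {np.mean(scores):.3f}")
```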

X. Ratings inter-rater agreement
Rater agreement was measured by randomly selecting two raters for each photo and computing Pearson's product-moment correlation coefficient between the resulting vectors of ratings. To mitigate sampling bias, we iterated this process five times and averaged the resulting coefficients. Rater agreement showed positive correlations across all ratings categories (p < 1×10⁻³ for all values shown), including r_happy = .39.
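A sketch of this agreement computation; the data structure is a hypothetical stand-in (one list of rating scores per photo, with at least two raters each).

```python
# Sketch of inter-rater agreement: correlate two randomly chosen raters per
# photo, repeat five times, and average the coefficients.
import numpy as np
from scipy.stats import pearsonr

def agreement(ratings_by_photo, runs=5, seed=0):
    """ratings_by_photo: list of per-photo rating lists (>= 2 raters each)."""
    rng = np.random.default_rng(seed)
    rs = []
    for _ in range(runs):
        a, b = [], []
        for photo in ratings_by_photo:
            i, j = rng.choice(len(photo), size=2, replace=False)
            a.append(photo[i])
            b.append(photo[j])
        rs.append(pearsonr(a, b)[0])
    return float(np.mean(rs))
```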


Fig. 1. Comparison of HSV values. The right photograph has higher hue (bluer), lower saturation (grayer), and lower brightness (darker) than the left photograph. Instagram photos posted by depressed individuals had HSV values shifted toward those of the right photograph, compared with photos posted by healthy individuals.

Fig. 2. Magnitude and direction of regression coefficients in the All-data (N=24,713) and Pre-diagnosis (N=18,513) models. X-axis values represent the adjustment in odds of an observation belonging to a depressed individual, per standardized unit increase of each predictive variable. Odds were generated by exponentiating the logistic regression log-odds coefficients.
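For reference, the conversion described in the caption is the standard odds transform: each plotted value is the exponentiated coefficient,

$$\mathrm{OR}_k = e^{\beta_k},$$

where $\beta_k$ is the log-odds coefficient for predictor $k$. A value of $\mathrm{OR}_k > 1$ means that a standardized unit increase in predictor $k$ raises the odds that an observation belongs to a depressed individual.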

Fig. 3. Instagram filter usage among depressed and healthy participants. Bars indicate the difference between observed and expected usage frequencies, based on a chi-squared analysis of independence. Blue bars indicate disproportionate use of a filter by depressed participants compared to healthy participants; orange bars indicate the reverse. All-data model results are displayed; similar results were observed for the Pre-diagnosis model (see SI Appendix XI).
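A sketch of the chi-squared computation behind these bars; the contingency counts here are purely illustrative, not the study's data.

```python
# Sketch of the observed-minus-expected filter analysis (illustrative counts).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: depressed, healthy. Columns: filter categories (e.g., Inkwell, Valencia, none).
observed = np.array([[35, 12, 80],
                     [20, 40, 90]])
chi2, p, dof, expected = chi2_contingency(observed)
difference = observed - expected   # the bar heights plotted in Fig. 3
```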
Parameters used: scale_factors = [1.05, 1.4], min_neighbors = 4, min_size = (20px, 20px). Algorithm accuracy was assessed by manually coding a random sample of 400 photos (100 photos from each combination of depressed/healthy and detected/undetected). Detection accuracy was roughly equal across groups. Face detection accuracy: depressed, no face detected: 77% accurate; healthy, no face detected: 79% accurate; depressed, 1+ faces detected: 59% accurate; healthy, 1+ faces detected: 61% accurate.

For two models $M_a$ and $M_b$, with respective parameters $\theta_a$ and $\theta_b$ and observed data $D$, the Bayes factor is computed as the ratio $K = p(D \mid M_a) / p(D \mid M_b)$. A Bayes factor greater than 1 supports model $M_a$ over $M_b$. Jeffreys (46) established a key for interpreting $K$ in terms of evidence for $M_a$ as the stronger model.

V. MCMC diagnostic plots
Fig. S1. Trace and density plots for All-data model MCMC simulations.

Fig. S2. Autocorrelation plot for All-data model MCMC simulations. Only the first chain is displayed for conciseness (second-chain output is nearly identical).

Fig. S4. Autocorrelation plot for Pre-diagnosis model MCMC simulations. Only the first chain is displayed for conciseness (second-chain output is nearly identical).

Fig. S9. Instagram filter usage among depressed and healthy participants. Bars indicate the difference between observed and expected usage frequencies, based on a chi-squared analysis of independence. Blue bars indicate disproportionate use of a filter by depressed participants compared to healthy participants; orange bars indicate the reverse. Pre-diagnosis model results are displayed; similar results were observed for the All-data model (see main text, Figure 3).

Table 1. Comparison of accuracy metrics for All-data and Pre-diagnosis model predictions. General practitioners' diagnostic accuracy from Mitchell, Vaze, and Rao (24) (MVR) is included for comparison.

Table S2. Logistic regression output for the All-data model (N=24,713). HPD Level = Highest Posterior Density Level, the probability that a regression coefficient falls within the given HPD Interval. HPD Levels listed are the highest probabilities at which it can be claimed that a coefficient's HPD Interval excludes zero.

Table S3. Logistic regression output for the Pre-diagnosis model (N=18,513). HPD Level = Highest Posterior Density Level, the probability that a regression coefficient falls within the given HPD Interval. HPD Levels listed are the highest probabilities at which it can be claimed that a coefficient's HPD Interval excludes zero.

A posterior predictive check showed that All-data observations replicated from the joint posterior distribution consistently overestimated the proportion of depressed observations (replicated: 53.5% depressed; original: 30.9%), with a posterior predictive p-value of 1.0. Pre-diagnosis observations sampled from the joint posterior distribution slightly underestimated the proportion of depressed observations (replicated: 30.02% depressed; original: 37.97%), with a posterior predictive p-value of 0.039. Gelman et al. (49) suggested that a model with good replication accuracy should generate posterior predictive p-values within the range 0.05–0.95. Note that an extreme posterior predictive p-value does not mean that a model is wrong, only that it fails to be "right enough" to render a reasonable replication of its input. All models nevertheless far outperformed a simple null model in the capacity to correctly predict class membership.

Table S4. Logistic regression output for the Ratings model (N=8,976). HPD Level = Highest Posterior Density Level, the probability that a regression coefficient falls within the given HPD Interval. HPD Levels listed are the highest probabilities at which it can be claimed that a coefficient's HPD Interval excludes zero. A posterior predictive check of the Ratings model showed that sample observations replicated from the joint posterior distribution accurately represented the true proportion of depressed observations (replicated: 44.2% depressed; original: 43.9%), with a posterior predictive p-value of 0.516.

VIII. Instagram filter examples

Fig. S8. Examples of the Inkwell and Valencia Instagram filters. Inkwell converts color photos to black-and-white; Valencia lightens tint. Depressed participants favored Inkwell most, compared to healthy participants; healthy participants favored Valencia most, compared to depressed participants.

Table S7. Pearson's product-moment correlation scores for Ratings model features (columns) with ratings and computational features (rows).