Space L Clottey


Atlas Fellowship Application


ID: 7287567077

Round 1 Application


Deadline: Apr 11 2022 07:59 AM (BST)


## Applicant Profile

Phone number (optional)

We’ll use this to send you updates about the status of your application.

Gender (not used for assessment)


Race/Ethnicity (not used for assessment)

School name

Expected High School Graduation Year


Country

State/Province

How did you hear about us?

From what organization or person did you hear about Atlas?

## Assessments

If applicable, what were your scores on the following exams? Where applicable, give your best subscore on each section of an exam. Including test scores in your application is not required, and not submitting scores will not harm your application. If you haven’t taken any of the following tests and your score becomes a deciding factor on your application, we will fund you to take the GRE.

## PSAT 10/NMSQT (optional)

Reading & Writing | Mathematics

## SAT (optional)

Reading & Writing | Mathematics

## ACT (optional)

English | Reading | Mathematics | Science

## JEE-Advanced (optional)

Physics (Marks) | Math (Marks) | Chemistry (Marks) | Overall Rank

TOEFL Composite (optional)

Other Standardized Tests (optional)

If you’ve taken other standardized tests (such as the AMC, AIME, Pre-ACT, or GRE), please enter your scores here.

## Academic Profile

GPA (optional)

Please enter your (unweighted) grade point average and the maximum possible grade point average (e.g. 3.9/4.0). If you don’t have a GPA, leave this blank.

GitHub (optional)

Please add your GitHub or a link to another portfolio.

LinkedIn Profile (optional)

## Resume

If you already submitted a LinkedIn profile, or don’t have a resume, leave this blank.

  1. .pdf

## Activities

If you did not submit a LinkedIn or resume, please describe, in no more than 50 words each, the three extracurricular activities that you have been most committed to or are most meaningful for you.

## Free Response

These questions are intended to give us the best picture possible of you as a person and an applicant. We’re fine with rough answers—we don’t expect your application will benefit much from spending more than 30 mins on this section. Please complete all questions.

What important issue do you disagree on with most of your friends and why? (2–5 sentences)

What’s something you think is interesting or cool that you’ve done? (1-3 sentences)

(E.g., wrote a book, learned Colemak, organized a political campaign, came up with a novel take on a scientific, philosophical, or societal issue, broke a hash function, started a YouTube channel).

Note: We’re interested in admitting people to the program who are taking the initiative to change the world around them. It’s okay if your answer isn’t impressive by mainstream standards; we are aware that people have had different access to opportunities.

List three books, blog articles, movies, podcast episodes, or other media that have substantially influenced your worldview or thinking.


What do you think is the most pressing problem facing humanity today and why? (2–5 sentences)

If you were given a billion US dollars, what would you do with it and why? (2–5 sentences)

Assume that you’re unable to hire advisors.


Please read On Caring, a piece by Nate Soares about scope-sensitivity and doing good.

mindingourway.com/on-caring

If the above link doesn’t work for any reason, feel free to use this one.

What is one decision, not mentioned in the post, that a scope-sensitive person might make differently from a scope-insensitive person in the real world? (2–5 sentences)

## Quantitative Reasoning

## Question 1

[Image: a map of landmasses connected by seven bridges, drawn as red lines]

If I live on the island in the middle, is it possible for me to cross all seven bridges (represented by red lines) exactly once and return home?


For any map like this with islands and bridges, which attributes must the map have such that I could always return home after crossing every bridge exactly once?
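This is the classic Königsberg-bridges setup: treat each landmass as a node and each bridge as an edge of a multigraph. Euler’s answer to the second question is that a closed walk crossing every bridge exactly once exists exactly when the bridge graph is connected and every landmass touches an even number of bridges. A minimal sketch, assuming the classic seven-bridge layout for the map in the image:

```python
from collections import Counter

# Hypothetical edge list standing in for the map in the image
# (assumed: the classic Königsberg layout of two river banks, a
# central island, and an eastern landmass joined by seven bridges).
bridges = [
    ("north_bank", "island"), ("north_bank", "island"),
    ("south_bank", "island"), ("south_bank", "island"),
    ("north_bank", "east"), ("south_bank", "east"),
    ("island", "east"),
]

# Degree of each landmass = number of bridge ends touching it.
degree = Counter()
for a, b in bridges:
    degree[a] += 1
    degree[b] += 1

# A round trip crossing every bridge exactly once (an Euler circuit)
# exists iff the graph is connected and every degree is even.
print(dict(degree))                              # {'north_bank': 3, 'island': 5, 'south_bank': 3, 'east': 3}
print(all(d % 2 == 0 for d in degree.values()))  # False: no such round trip
```

On this assumed layout every landmass has odd degree, which is why the classic answer to the first question is no.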

## Question 2


Event A has a 90% chance of occurring. Event B has a 20% chance of occurring. The correlation (i.e. whether they tend to occur together, or separately, or are unrelated) between the events is unknown.

What is the maximum probability that both event A and event B will occur? (%)

What is the minimum probability that both event A and event B will occur? (%)
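As a worked reference, the extremes follow from the standard Fréchet bounds on a joint probability:

$$
\max\bigl(0,\; P(A) + P(B) - 1\bigr) \;\le\; P(A \cap B) \;\le\; \min\bigl(P(A),\, P(B)\bigr)
$$

With $P(A) = 0.9$ and $P(B) = 0.2$, the maximum is $\min(0.9, 0.2) = 0.2$, i.e. 20% (B only occurs when A does), and the minimum is $\max(0, 0.9 + 0.2 - 1) = 0.1$, i.e. 10% (the events overlap as little as possible).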

## Logical Reasoning

## Question 1

Historians have commonly believed that paint used before the year 1500 did not contain copper. However, lab techniques have shown that copper is present both in the paint of the Mona Lisa, painted by Leonardo da Vinci (1452-1519), a widely renowned and timeless piece that is one of the most valuable in the Louvre today, and in that of another painting known as the Sine Nomine, from the same time period, whose painter is unknown, but not in the paint of any other Renaissance painting analyzed. This is strong evidence that the Sine Nomine was painted by da Vinci, as well as evidence that the presence of copper in the paint of a recently resurfaced map by Fra Mauro, ostensibly from the year 1450, cannot be used as an argument against the map’s authenticity.

The reasoning in the passage is vulnerable to criticism on the grounds that


## Question 2

Anne is 35 years old, Bob is 24 years old, Charlie has feature A, and Daniel doesn’t have feature A. You’re allowed to ask people how old they are and whether they have feature A. You want to conclusively test the hypothesis “among these four people, those above age 30 definitely have feature A”.

What’s the minimum number of people you have to ask?


Which people do you ask?
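This question has the structure of a Wason selection task: the hypothesis can only be falsified by someone who is above 30 and lacks feature A, so only the people who might satisfy both conditions need to be asked. A minimal sketch of that filter, using the four people as given:

```python
# Known facts from the question; None marks an unknown attribute.
people = {
    "Anne":    {"age": 35,   "has_A": None},   # over 30, feature unknown
    "Bob":     {"age": 24,   "has_A": None},   # under 30: cannot falsify
    "Charlie": {"age": None, "has_A": True},   # has A: cannot falsify
    "Daniel":  {"age": None, "has_A": False},  # lacks A: falsifies if over 30
}

# Ask exactly those who could still turn out to be "above 30 without A".
to_ask = [
    name for name, p in people.items()
    if (p["age"] is None or p["age"] > 30)      # might be above 30
    and (p["has_A"] is None or not p["has_A"])  # might lack feature A
]
print(to_ask)  # ['Anne', 'Daniel'] -> two questions suffice
```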

— —

What important issue do you disagree on with most of your friends and why? (2–5 sentences)

Some of my friends who are interested in changing the world do not often talk about changing compulsory education as a cause area in and of itself. I think there are a lot of cognitive biases at play that make compulsory education an easier pill to swallow once you’re out of it, a pill I think I am slowly digesting the more distant the memories of my school become.

I also disagree with some of my friends about whether climate-change societies are a useful expenditure of time at school, given the array of more pressing issues; however, I don’t have robust arguments surrounding this myself.

What’s something you think is interesting or cool that you’ve done? (1-3 sentences)

(E.g., wrote a book, learned Colemak, organized a political campaign, came up with a novel take on a scientific, philosophical, or societal issue, broke a hash function, started a YouTube channel).

Note: We’re interested in admitting people to the program who are taking the initiative to change the world around them. It’s okay if your answer isn’t impressive by mainstream standards; we are aware that people have had different access to opportunities.

I co-founded a movement (called End School Slavery) for burning down the global system of compulsory schooling, which we saw as highly unethical. I created a Twitter account that posted quotes from writings on compulsory education, and planned out a site that would incorporate everything; the site in its current form exists at endschoolslavery.brick.do. I also got over 2,000 views on my essay about compulsory schooling (lesswrong.com/thetruesquidward), which proved contentious on LessWrong, drawing over 76 comments. I also did an interview on a podcast about this, and created a web app exploring an alternate learning method through video (increview.app).

What do you think is the most pressing problem facing humanity today and why? (2–5 sentences)

I used to think it was unethical schooling, as it impacted millions, if not billions, of people, and the people affected had no vote (by the time they were adults, rationalisation and other factors, like the compression of memories involving boredom, are at play, such that they no longer take the suffering they experienced in childhood anywhere near as seriously).

After reading the works of Jason Hickel, I then thought that the unequal policy and power wielded by rich countries against poor ones was one of the most pressing problems, as his book makes the case that these forces create the cycle of poverty and keep it in place.

But I now think that unaligned AI is the most important problem. The arguments that it is extremely likely to be created in the next few decades are robust, as are the arguments that it will be extremely difficult to stop once it is created and begins recursively self-improving.

If you were given a billion US dollars, what would you do with it and why? (2–5 sentences)

I would spend a sizeable fraction on human augmentation: I would fund a trustworthy entrepreneur to iterate on current nootropics and find something more potent and sustainable, and would also fund other research into human intelligence augmentation. Though keeping something like this secret is a high-risk operation, I would make an effort to keep the fruits of this research quiet, providing them only to promising safety research organisations rather than making them widespread, as that would increase the risk from AI.

I would also fund the personal development of a promising individual to run for president of the United States, in order to cause dramatic changes in policy work, including around near-termist causes like democratising the World Bank.

Please read On Caring, a piece by Nate Soares about scope-sensitivity and doing good.

A scope-sensitive person would generally orient themselves towards working on one of the most important problems during their life, or on the problems that affect the largest number of sentient beings over the most important stretches of time.

For example, if they originally believe that stopping compulsory education is one of the most important cause areas, they may stop optimising solely for that and spend more time exploring and skill-building, in order to do more impactful work that affects more people later in their lives.

— — — —

Longtermism application form

Suppose it is 2024/2025 (or whatever year you will finish your degree), you are about to graduate from university, and you are thinking about what to do next. How would you reason about this question, and what are some possible paths that seem potentially compelling to you today?

Note that we fully appreciate that as a prospective undergraduate student, it’s very difficult to anticipate how your interests and opportunities will evolve over the coming years. We are therefore not asking you to make a prediction about what sort of paths you will actually choose to pursue following your graduation. Instead, we are interested in learning more about how you would reason about this sort of question, i.e. what sorts of factors you expect you would consider, as well as what paths seem most interesting to you right now.

First, I would aim to get a strong sense of personal fit for any career. I would consider the degree I have just completed and reflect on how much I enjoyed the subject matter. Then I’d look at the internships, workshops, and other things I have done over the past five years and reflect on how much I enjoyed them.

If it turned out that I didn’t actually learn much from my degree (and that I’m not particularly well skilled after all), then, depending on the state of the world, it is possible I would spend some time skilling up at an organisation unrelated to EA.

If I am in a position where I am unhappy, I will prioritise that and attempt to move to wherever in the world my friends are in highest concentration, and focus on just resting and enjoying the positive social opportunities in the meantime.

One appealing path after university is creating a startup, potentially web-development related. I already have proven interest and skill in web development, and I find it deeply rewarding and fun. Doing software for an EA org is a possibility that would be personally rewarding; however, doing more direct work would be preferable.

Travelling, in order to get a deeper emotional sense of the variety inherent in the world and of what’s at stake if it is all destroyed, would also be high value for those reasons; however, this is not something that has to wait until after university.

On the chance that AI alignment is by then significantly more legible to me (seemingly more likely in the world where I major in Computer Science over Economics), it would be a matter of urgency to get into an alignment organization and start making useful contributions.

In the case that I major in Economics, it is likely that policy work, though already largely appealing, would be a lot more legible and tractable to me, so applying to jobs in these fields is also a potential option.

Suppose you wanted to use your future career to solve one important problem and/or advance one important cause. If you would have to pick one such cause/problem today, what would it be and why?

Assuming personal fit, I would choose AI alignment as a cause area. I have been familiar with the arguments for a long time; however, only now am I beginning to grok on an emotional level how likely it is to cause the end of humanity, and likely far more. This came through a combination of reading MIRI’s “Death with Dignity” post, hearing Nate Soares discuss these ideas in person, and digesting the arguments for AI safety in a swifter and more condensed form, all in a rather short time frame.

However, it seems unlikely that working in AI alignment is a good personal fit for me. As it stands, I find I do not enjoy work with slower feedback cycles compared to work with faster feedback. Additionally, though I will be diving into machine learning soon, and computer science is one of my favourite subjects, I have no proven interest in machine learning thus far.

However, there is a chance these factors could change.

It is my understanding that risks surrounding biosecurity are among the largest downside risks we face.

I expect I have decently high personal fit with such a field, as I find retrospectives on policies and their effects highly interesting. I also know this problem to be highly tractable, which means faster feedback loops, which in turn makes it more fun to work on.

What book?

HPMOR

Harry Potter and the Methods of Rationality was the first time I encountered a protagonist who took the idea of suffering incredibly seriously. I found his revulsion at the cruelty of prisons, as well as his total defiance of death, really motivating. I was familiar with Nick Bostrom’s Fable of the Dragon Tyrant prior to reading HPMOR, so the ideas weren’t new to me, but seeing them taken so seriously was really inspirational.

Doing Good Better

I was already familiar with the ideas of Effective Altruism prior to reading Doing Good Better, but as a prerequisite to the idea of starting an EA club at my high school, I thought it would be a good idea to become a lot more familiar with the ideas at their source. My primary emotional takeaway from the book is the immense difference funding can make between different causes. More than that, though, MacAskill often goes from describing some horrific infliction of human suffering to describing another that is just as bad but cheaper to solve by a significant factor. This made it click in my head how many different shades of deplorable suffering there are in the world, and how every fight you decide to take on involves a massive tradeoff against the other causes you aren’t doing anything about, but that this tradeoff necessarily has to exist.

Unaligned AGI

I was first introduced to the existential risks involved with unaligned AGI in Eliezer Yudkowsky’s Rationality: From AI to Zombies. The arguments for why AGIs would not have human values by default, and why they would have every incentive to deceive and take control, have been clear to me for a while. However, I had a reaction to the material that I did not endorse: a foundational feeling of safety and trust that Eliezer Yudkowsky would create an aligned AI.

His recent publication of MIRI’s new “Death With Dignity” strategy (their acceptance that the alignment problem will not be solved in time), together with meeting Nate Soares in person (another person I had deep trust in) and hearing him rehash these ideas, and, outside of that, having the arguments for AI safety rehashed to me in greater detail and far more succinctly than in the Sequences, made me understand on an even deeper level how likely it is that AI will destroy the world shortly. This helped me grok why so much funding was going to alignment, and how it is the most important, most urgent problem to solve.

— — — — —

EA Summer Communications Fellowship Application

When: July 1, 2022 → August 31, 2022. If you can’t make it for the entire fellowship, please still apply and let us know your availability below. We expect fellows to be available for the start of the fellowship, but understand that some schools and other programs begin in mid-to-late August and can be flexible.

Where: The Bay Area

Deadline to Apply: Applications accepted on a rolling basis through April 25th, 2022

For more information, go to our website at https://tinyurl.com/EACommsFellowship or contact Zeke Reffe-Hogan at zekesimon@gmail.com

First Name *

Last Name *

Email *

What stage of your career or education are you in?

Please elaborate if you selected “other”

LinkedIn/CV

Please add a link to your LinkedIn profile or other relevant online CVs/résumés (e.g., GitHub, personal website etc). Alternatively, attach a file below.

https://spacelutt.com/projects (this page largely covers my more artsy projects)

LinkedIn/CV

Attach a file here if applicable:


Space Lutterodt-Clottey Resume March 2022.pdf

If you’re not available for the entirety of the fellowship, when would you need to leave?

(optional)

What is your background with Effective Altruism? *

Please limit your response to 100 words.

I found EA through the rationality community. I’ve attended three EAG conferences, and visited the Lightcone and Constellation offices while in the Bay Area for the Atlas Fellowship Beta Program. I have a lot of friends in the EA community.

Communications is a field that encompasses many career paths. Which comms field(s) are you most interested in pursuing? Tell us about any relevant experience you have with this field or other comms-related skills or experience. *

Examples of communications career paths include journalism, community building, policy comms, art and entertainment, writing, marketing, fundraising, and more. If any career path that interests you and/or that you have experience in could help in communicating about EA to people outside the movement, please include it in your answer. Please limit your response to 150 words or less.

I am highly interested in the art and entertainment and writing fields, and interested in learning more about the journalism field. I have a lot of experience writing, having written fiction and non-fiction online since I was twelve on multiple blogs (https://gingerjumble.wordpress.com / https://squid.brick.do / https://spacelutt.com). I won regionals in the Jack Petchey Speak Out Competition in 2020 for my speech “I Want You To Be Happier” (https://www.youtube.com/watch?v=eWM_1WlWxjc). I also have a podcast feed where I’ve done interviews, discussions of media, and narrations of short stories (https://anchor.fm/daylightismine/), and one podcast episode called “Quote Therapy” that was nominated for the BBC Young Audio Awards in the Rising Talent category (https://dukeboxradio.com/podcast/quote-therapy/).

I am also very interested in web development, and made my personal site (https://spacelutt.com) as well as a web app for incrementally watching videos (https://increview.app).

In terms of video, I’m very interested in filmmaking and editing; outside of currently being on the writing and editing teams for a production at my school, I make animations and hand-drawn animatics.

Why are you interested in attending this fellowship? *

Please limit your response to 150 words or less.

I think giving people who otherwise would have been doing only altruistic work a community and resources with which to make that work highly effective is really important, and can significantly boost the impact they have; but this can’t happen if people don’t hear about EA, or hear poor, unrepresentative, uninteresting messages describing it.

I want to learn more about how the things I naturally find really enjoyable regarding media creation can be used productively in industry, beyond the intrinsic enjoyment working on them provides. This would also help me get a sense of personal fit for communications work, and would help with future comparisons of which career I am likely to have the highest impact in.

I also think it would be very rewarding spending time with other people who are interested in arts and media and creation.

If you could work on any EA communications project, what would it be and how would you approach executing it? *

Projects can include anything related to communicating to people outside EA about the movement or a particular EA cause – anything from starting an organization, to writing articles, to writing a book or making a movie. Note: It is unlikely–but not impossible–that you will work on this particular project this summer. Most projects will be ones mentors are already working on and which have a particularly high likelihood of making a positive impact. See the “How is the fellowship structured?” section of this page for examples of potential projects: https://tinyurl.com/EACommsFellowship Please limit your response to 200 words or less.

I would create a highly edited animated video that conveys the basics of EA, in the style of a Fireship video describing web development topics (https://youtu.be/U3aXWizDbQ4). These videos are very entertaining to watch, and the use of a relaxed narrator with high-speed, smooth, well-transitioned animations makes them extremely easy to understand while watching.

I would start by writing a script for the video, potentially by analysing the structure of a script from a Fireship video (such as how long to spend on the intro, and what’s covered in how much time), and heavily adapting pre-existing, optimised-for-clarity definitions of EA. I would then watch tutorial videos on all the relevant features of After Effects. As I put the video together in Premiere and After Effects, I would adjust the script, removing words where graphs, images, or short videos would do, to adapt it fully for an audiovisual rather than purely textual experience.

Give an example of a piece of Effective Altruist content (an article, a website, an argument from an EA author, a video, etc.) that you think should be improved, and tell us how you would improve it. *

Please limit your response to 250 words or less.

A small example is Peter Singer’s “drowning child” argument, where he argues that you have a moral obligation to save a child whom you see drowning. It would be more appropriate to rephrase this in terms of whether you would want to save the drowning child, as in a sense there is no such thing as a moral obligation (something that is spoken about extensively in Replacing Guilt).

A different example is the website for the Atlas Fellowship application (https://atlasfellowship.org). I would increase the size of the text under the “What” header, decrease the amount of text, and increase the spacing between the paragraphs to make it an easier and more pleasant read. I would also make the logo in the upper-left corner a lot bigger, and radically increase the shadow behind the white text in front of the illustration to make it more readable.

Alternatively, regarding the post “Simplify EA Pitches to ‘Holy Shit, X-Risk’” (https://forum.effectivealtruism.org/posts/rFpfW2ndHSX7ERWLH/simplify-ea-pitches-to-holy-shit-x-risk): it has many good ideas, but I would make it radically shorter. Though the conversational tone can be part of the style, the length of the post is daunting, and I would remove a lot of the fluff surrounding the ideas themselves. For example, under the caveats, many of the bullet points have sub-bullet points beneath them, with information that could be condensed into one level of expansion as opposed to two.

Imagine you’re talking to someone who’s new to EA. How would you approach explaining EA to them? *

Please limit your response to 150 words or less.

I would say that EA places a focus on doing the most good with the available resources, and use the major example that some charities are significantly more effective than others, and that EA has a big focus on finding those charities and making sure they are the ones receiving funding.

I would then explain the basics of longtermism, conveying how some believe that even though there are many really bad things happening here and now, there are potential things in the future that could be even worse, for example a pandemic worse than Covid. Therefore, a large part of EA involves decreasing the risks to humanity in the future.

References (optional)

References are totally optional, but can help us learn more about you. Please don’t let lack of references or waiting to contact potential references prevent you from applying. If you want, give 1-3 references, along with their contact information. References by people who are directly involved in Effective Altruism and adjacent communities are particularly useful, though people who can speak to your abilities and who you are as a person more generally are also helpful for us.

Sydney von Arx - sydney@atlasfellowship.org / +1 503-462-5740
Kyle Scott - kyle@alignment.org / +1 925-817-7188

Is there anything else you’d like us to know?

Are you interested in having us share your application with potential employers/relevant organizations/experts who might be interested in collaborating with, advising, or hiring you? *
