Topic: team

Posts related to the Texas Justice Initiative team

  • Join Our Team

    TJI functions thanks to talented volunteers who work collaboratively to build and maintain our website and data tools. Our team meets weekly – remotely and, when volunteers in Austin are comfortable doing so again, in person – and keeps in touch during the week over Slack and email. We are lucky to have consistently had amazing volunteers, and we are looking to add a few people to the mix.

    Our volunteers have a variety of skill sets, backgrounds, and levels of experience. Frequently, we learn from and teach each other during our weekly meetings and in one-off conversations. We'd love to welcome folks who can work on existing projects like automating our data collection processes, incorporating maps into our existing data sets, and applying our new style guide to the colors in existing charts. We're also looking for people to fill these new roles:

    • a data visualization designer/architect for a new project that incorporates several sets of data on county jail populations over time;

    • a data architect to coordinate with corporate volunteers on an automation tool;

    • an SEO and Google Analytics whiz.

    Does any of this sound like you? If so, please fill out this form, where you'll also find the fine print on our stack.

  • Using Markdown and GitHub for Blogging - Part 1

    Our TJI volunteer team has grown over the last year. We have new volunteers who are working on various projects. With this growth, I've started thinking about a blog where volunteers like myself can document and publish our work easily and efficiently.

    Blogging is useful in many ways. First, it highlights a volunteer's work in their own words and gives them credit. This process enables the continued professional development of team members who are interested in learning new things. Second, it allows us to communicate with various readers such as tech workers, policy makers, social scientists, community members, and so on. This way, we can help other non-profits like ours as well. Finally, similar to our existing workflow, drafts can be reviewed by our team members before publishing, so our writing is robust and knowledge transfer happens naturally during the process.

    Blogging in markup languages

    We discussed how to formalize a publication and review process. Previously, we've used Google Docs to write, edit, and review our drafts. Our review process has been pretty standard: reviewers commented on specific parts of a draft, and those comments appeared in the margin.

    Instead of Google Docs, I suggested using markup languages such as Markdown or reStructuredText, which are commonly used in software development. Compared with existing word processor software such as MS Word or Google Docs, markup languages can be somewhat challenging for first-time users. You need to learn the syntax, and sometimes a bit of HTML and CSS knowledge is also required to get the style right.

    However, I think the following benefits outweigh those challenges:

    • It is possible to use a version control tool to track changes easily.
    • All styling is written explicitly (e.g., **text** for bold text) and is thus easily discoverable.
    • It is easy to include code snippets in the post and they are automatically rendered in a standardized way.
    • It is easy and fast to publish drafts online.
    • With a document builder, we can convert a draft into various printable formats.
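    As a quick illustration of that explicit styling, here is what a few common constructs look like in Markdown (this is generic CommonMark syntax, nothing specific to TJI's setup):

```markdown
# A level-one heading

**Bold** and *italic* styling is written out explicitly.
Inline `code` uses backticks.

- A bullet list item
- [A hyperlink](https://texasjusticeinitiative.org)

> A block quote, useful for quoting an email or report.
```

    Because every style is spelled out in plain text like this, the files diff cleanly and work naturally with version control.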

    Reviewing blog posts on GitHub

    Once blog posts are written in markup languages, we can use a version control system such as git to track changes. This makes the blog review process similar to code review in software development. Then it's possible for our volunteers to collaborate on a blog post through GitHub.
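    As a minimal sketch of what that tracking looks like – the repository, file name, and commit messages here are all made up – a post revision under git might go like this:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"                   # throwaway repo for the sketch
git init -q
git config user.email "volunteer@example.com"   # hypothetical identity
git config user.name "TJI Volunteer"
printf 'TJI collects data on deaths in custody.\n' > post.md
git add post.md
git commit -q -m "First draft"
printf 'TJI collects and publishes data on deaths in custody.\n' > post.md
git diff            # shows the exact change between the draft and the revision
git commit -qam "Revise after review comment"
git log --oneline   # the full revision history travels with the post
```

    Every edit becomes a commit, so the exact wording changes between versions are always recoverable.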

    Some of you may feel skeptical about this approach because you might know GitHub primarily as a platform for code repositories. I understand this perspective, because that's exactly how I felt when I was asked to submit a manuscript as a reStructuredText document at the Scientific Computing with Python (SciPy) conference in 2019. However, after going through the process from both ends – as an author in 2019 and a reviewer last year – I've come to really enjoy the process.

    At the SciPy 2019 conference, I submitted my manuscript as a pull request to their proceedings repository. I wrote the manuscript as a reStructuredText file, attached figures as separate files, and two reviewers reviewed it.

    Last year, I was on the opposite end of the process and reviewed a paper that was submitted as a pull request.

    Based on these experiences, I found the following benefits:

    1. Exact exchanges between the original and the revision are easily discoverable. This is especially useful if a manuscript goes through multiple revisions.
    2. Switching to a previous version of the manuscript is seamless. It's easy to go back and forth between different versions, which keeps people from creating multiple copies (files) of the manuscript.
    3. Communications and decision-making are tracked with the manuscript. MS Word and Google Docs don't provide much screen space for lengthy discussions, so you often have to resolve earlier comments to reduce visual clutter, and then those comments disappear. You can use email instead, but then your communication lives separately from the manuscript.
    4. Non-authors can't edit the manuscript without the authors' permission. Only authors can make changes. That doesn't give them all the power, though, because reviewers' approval is needed for publication (i.e., for the pull request to be merged).
    5. Group communication is available. Normally reviewers don't talk to each other, but comment threads make group discussion easy. We can discuss and develop better ideas together, and reviewers aren't siloed.
    6. The review process is transparent. Because of GitHub's traceability, public repositories like ours let others see the entire review process, including the communication history.
    7. Technical material is easy to review. Code snippets, hyperlinks, etc. can be incorporated directly and are all visible in the manuscript.
    8. Everything lives in one place. Normally, non-document files live somewhere else; this way, our blog posts, documentation, and code all stay in the same repository.

    How it Works


    Write a blog post in a markup language using any text editor. Once done, submit the manuscript for review as a pull request, and assign reviewers on the pull request page. To write a post in a markup language, you need to know the syntax; to submit it as a pull request, you need a bit of knowledge of how to use git.
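    The submission step can be sketched with plain git commands. The branch and file names below are hypothetical; in a real repository you would push the branch to GitHub and open the pull request there:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"                   # throwaway repo for the sketch
git init -q
git config user.email "volunteer@example.com"   # hypothetical identity
git config user.name "TJI Volunteer"
git checkout -q -b draft/markdown-blogging      # one branch per draft
mkdir -p posts
printf '# Using Markdown and GitHub for Blogging\n\nDraft body.\n' \
  > posts/markdown-blogging.md
git add posts/markdown-blogging.md
git commit -q -m "Add draft for review"
# In a real repository: git push origin draft/markdown-blogging,
# then open a pull request on GitHub and assign reviewers there.
git log --oneline
```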


    On the pull request page, you can check who is assigned as a reviewer; if you are assigned, you will see your ID. GitHub Docs has detailed information on how to review changes. Keep in mind that even though you will read the manuscript in its markup-language form, you can still check the rendered version by clicking the "View file" option in the menu.

    From Pull Request to Merge

    Once the usual back-and-forth review begins, reviewers make comments and request changes, and authors either accept them or offer a rebuttal. This exchange is documented through comments, and every piece of communication is tracked. Once everyone is satisfied, we make a collective decision to merge the pull request, and the post becomes part of the master branch. From there, we have several options: we can use existing services to publish our repository directly as a website, or do something more sophisticated.
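    Merging usually happens with the button on the pull request page, but the equivalent git operations look roughly like this (the branch and file names are again made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"                   # throwaway repo for the sketch
git init -q
git config user.email "volunteer@example.com"   # hypothetical identity
git config user.name "TJI Volunteer"
printf '# Blog\n' > README.md
git add README.md && git commit -q -m "Initial commit"
main=$(git symbolic-ref --short HEAD)           # default branch name (main or master)
git checkout -q -b draft/watchdog-post          # the reviewed draft branch
printf 'Post body.\n' > watchdog.md
git add watchdog.md && git commit -q -m "Add post approved in review"
git checkout -q "$main"
git merge -q --no-ff -m "Merge reviewed post" draft/watchdog-post
git log --oneline                               # merge commit now on the default branch
```

    After the merge, a static-site service watching the default branch can pick up the new post and publish it.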

  • This is How We Watchdog

    We frequently mention that we “watchdog” the data that we work with as much as possible, but I thought I’d peel that back a bit. What do TJI’s oversight efforts look like?

    Each month, I get new data from the Texas Office of the Attorney General reflecting the previous month's reported deaths in custody and shootings of and by law enforcement officers. I add this new data to TJI's main data sets. Overnight, bots look for changes to the data sets, which are then pushed through our processing pipeline.

    The data-entry process is still quite manual, and I often find errors in the submitted reports – everything from misspellings and wrong names to transposed dates of death that result in negative ages and narratives obviously pulled from a different report. I note these, as well as missing reports – reports on shootings that were the subject of media coverage but were not filed within the required 30-day time period. I recently went through this process and wanted to document and narrate what this effort looks like.

    First observation: There were quite a few custodial deaths in November – 144 deaths reported in that month alone, when we usually see around 90. Next, I noted a few missing custodial death reports:

    Then, I noted that there were two reports filed by separate people at the Texas Department of Criminal Justice for the 11/15 death of Omar Rojas:

    And finally, the officer-involved shooting report for Reginald Alexander Jr. had been submitted by Dallas police, though media reported that he was shot by officers from another agency.

    For each of these inconsistencies, I emailed the person at the agency responsible for filing the report and pointed out what I'd found. A couple responded immediately. One had misunderstood the law and filed the missing report that day; another said:

    Hello Ma'am The officers involved do not work for the Dallas Police Department.

    They are employed by the Dallas County community College system. They were unclear on whether or not they were set up to input information into the database.

    I did not see a way to include in the drop down menu.

    I will add the custodial death report because I do not believe that (they) have done so.

    (I replied and advised that he run that by the OAG to be sure.)

    My best hope is that the agencies amend or file the reports, but if they don't, my only recourse is to point out to the Office of the Attorney General that the reports are missing. Even then, the OAG doesn't really have to take any action – it is merely a repository for those reports. Not filing a custodial death report is a Class B misdemeanor under a law passed in 1983, though that punishment has never been used.

    My approach, although toothless, often works. In 2020, agencies submitted 21 custodial death reports after I pointed out that the reports were missing. But agencies can simply ignore my emails if they so choose, and then all I can do is tattle. It's great that Texas requires the collection of these reports, but watchdogging them can be a challenge, especially when the law has no teeth.

    Tags: data oversight, team