Active projects and challenges as of 21.12.2024 12:45.
#asktj
Ask Ti Jean
Data safety on the Road
During the Rethink Journalism hackathon in Basel, we brainstormed how to help participants of similar future events get help with information security: recommendations of materials, as well as more targeted advice about computing safety.
Photo credits: Kitty in the city
💸 Donate to digitale-gesellschaft.ch who presented a position paper and related news at #dinacon22
See also:
https://github.com/enaqx/awesome-pentest
Background
At #rejoha22 we discussed concerns about journalists being targets of cyber-infiltration. We started an open pad, in which we suggest developing a toolkit: so that, even with minimal preparation, the infosec resilience of your community can be slightly improved - one hackathon at a time. The idea was further developed at the DINAcon HACKnight and other community meetings.
We originally called the project "Ask Jack", but out of respect for Jack Schofield, the Guardian’s former computer editor and long-running author of the Ask Jack advice column, we have renamed it ASK TI JEAN. Why? Ti Jean (little Jean) was a childhood nickname of Jack Kerouac - a great American novelist of French-Canadian ancestry, who was born 100 years ago and had aspirations to be a journalist. One of his most famous works is On the Road, which happens to share an acronym with Off the Record (OTR), a cryptographic protocol worth knowing.
Man in the Middle
Is not Worried
He knows his Karma
Is not buried
But his Karma,
Unknown to him,
May end —
-- Mexico City Blues by Jack Kerouac
Ask Ti Jean
Data safety on the Road
(A humble manifesto for better hackathons)
Journalists, freelancers, designers, everyday citizens - demand more security in our digital life! Doubly so for people who are involved in critical investigation: all those whose digital rights and identities may, for various reasons, be threatened.
Opportunities to quickly or inexpensively receive a security checkup from a professional may still be rare: look for an organization in your area that has a strong policy and support options. Ask them to host a community meeting, workshop or hackathon.
Diving into the world of cybersecurity for the first time can be overwhelming or intimidating if you do not get clear, direct, actionable advice from a trusted source. A network lets you get help from people for a reasonable price - or for other clearly stipulated motivations, like a volunteering certificate. But how does one find, and connect to, such a network?
~ THEREFORE ~
We propose ASK TI JEAN: an Information Security (infosec) check-up and knowledge exchange that every community event should have as part of its deployment kit.
Specifically, we suggest:
- Display an informative poster designed to convey a practical and attractive Way to Infosec-up;
- Hand out pamphlets or booklets from reliable and current sources of user support;
- Link to online tutorials and Q&A forums where further help is available;
- Include cyber-stretching in your routine, with simple exercises like changing passwords and checking firewall logs;
- Provision an ad-hoc network security tool for a background vulnerability assessment, explaining to your audience how it works.
Since having a dedicated on-site expert can be difficult or expensive to organize, at least have someone take the time to obtain and read through some material and prepare some tools. In the following sections, we are gathering some starting points.
Please 🙏 contribute to this hackpad using the open access comments, or edit after logging in.
Literature
What do we need to read and understand to develop this idea further?
- Guide (Ratgeber) by Digiges/WOZ/CCCCH
- Wikipedia - Cyber Self-Defence
- GIJN Cybersecurity Assessment (Info)
- Hacking book by Dominik Landwehr (2014)
- OKFN - eGovernment and security: a difficult relationship (2016)
- AWK - Cyber resilience in public administration (2020)
- Heise - The federal government no longer wants to trust anyone in IT (2022)
Learning & Sharing
Good starting points, tools and services to offer and build on. Examples of projects that recommend security-related tools and practices.
- Kali Linux
- ProtonVPN
- Key signing parties
- Journalist Toolbox
- SpyPi - Students for data security
- Ransomware Prevention Kit
- E-Sec 3D learning courses
- Infosec AG - eLearning Courses
Communities
Where could you reach out to find people to contribute to such a toolkit?
- Starship Factory (Basel)
- CoSin (Biel/Bienne)
- Eastermundigen (Bern)
- Bitwäscherei (Zürich)
- Moar Hackerspaces ..!
- CH Open / Open Education Days
- Chaos Computer Club Schweiz
- Linux User Group Schweiz
- SwissDevJobs
- FreelancerMap
Random
- Construction site safety poster vs. School of Data toolbox and poster
- Infosec posters (DuckDuckGo)
- Cybersecurity illustrations (heartbeat.ua on Dribbble)
Contribute
This text was created by lo & lo during the Rethink Journalism hackathon in Basel on November 26, 2022.
🖇️ Was Jack Kerouac really a hack? (2012)
Jack Kerouac with a cat, 1965 - Photography by Jerry Bauer
License
Licensed Creative Commons CC0 - Public Domain
crab.fit
Enter your availability to find a time that works for everyone!
We are using crab.fit to plan our DINAcon HACKnight meetups this year. It takes an interesting approach to collecting personal preferences in a data-protection-conformant way, and presents an original interface to enter and visualize group preferences. Despite being a small project, we found it reliable and useful for planning the HACKnight.
Donate: PayPal
{ hacknight challenges }
What tools do you use for planning meetings for your open initiative? Learn about some of the open source alternatives. Set up a plan for your next code jam, footy match, or Jass night at https://crab.fit
Consider setting up your own crab.fit instance on a free Google Cloud trial. Let the developers know how straightforward the process is and whether anything is lacking in the documentation. Become a translator or make a donation.
See the instructions below to deploy and contribute to the open source project. Check out for example issue #143 that we are already discussing with crab.fit devs.
Crab Fit
Align your schedules to find the perfect time that works for everyone. Licensed under the GNU GPLv3.
Contributing
⭐️ Bugs or feature requests
If you find any bugs or have a feature request, please create an issue by clicking here.
🌐 Translations
If you speak a language other than English and you want to help translate Crab Fit, fill out this form: https://forms.gle/azz1yGqhpLUka45S9
Setup
- Clone the repo.
- Run yarn in both the backend and frontend folders.
- Run yarn dev in the backend folder to start the API. Note: you will need a Google Cloud app set up with Datastore enabled, and the GOOGLE_APPLICATION_CREDENTIALS environment variable set to your service key path.
- Run yarn dev in the frontend folder to start the frontend.
🔌 Browser extension
The browser extension in crabfit-browser-extension can be tested by first running the frontend and changing the iframe URL in the extension's popup.html to match the local Crab Fit. Then it can be loaded as an unpacked extension in Chrome to test.
Deploy
Deployments are managed with GitHub Workflows.
To deploy cron jobs (i.e. monthly cleanup of old events), run gcloud app deploy cron.yaml.
🔌 Browser extension
Compress everything inside the crabfit-browser-extension folder and use that zip to deploy via the Chrome Web Store and Mozilla Add-ons store.
Data Package as a Service
Make open data small, self-published, and actionable
Open-data-by-default web applications like Flask-based CKAN or dribdat (that runs this site), Django-based SDPP, search engines like OpenSearch, etc., offer full-text search of their content and other APIs as a standard feature. For quickly sharing single datasets or developing 'single page applications' (SPAs) or visualizations, using a large backend application like this may be excessive.
Rationale
I'm supporting Portal.js and Livemark, which accomplish this very well - but sometimes I want something even simpler and more integrated into my data stack of Python and Pandas. There are portal previews, linked data endpoints, and wonderful tools like Datasette to dive into a resource, but these might not be ideal for tinkering with data in code. Providing data services through a statically-generated site like JKAN or Datacentral is another cheap and cheerful option. You may already be working on a data science notebook in Jupyter, R Shiny or Observable, but have trouble preparing your data on your own.
While working with Frictionless Data (a global initiative to improve the way quality open data is crowdsourced), I often wished that there was a way to put a quick API around a Data Package. On top of it, a user interface, a data science notebook, or a natural language interface could be built. The proposed project DaatS is a service in Python and Pandas which instantly turns a Data Package into an API.
Connecting this project to workflows means thinking of the API-fication of a dataset as a data transformation step: something a user might want to add with a couple of clicks to their data collection in order to benefit from Frictionless Data tools and other components in the open data ecosystem.
Example
You can see the idea in action, combined with Wikidata, as part of a recent project: Living Herbarium (GLAMhack 2022). Another example, involving data scraping automation, is Baumkataster.
{ hacknight challenges }
Create a Data Package. It might be your first or your 99th. It is easy and fun to scrape some data off the web and put some shiny wrapping and "nutritional" guidance around it. Ask @loleg if you need some help here, or see this or this detailed guide.
Use the DaatS template to add a repo with boilerplate code on your GitHub account. Or just download the repository to your local machine. Follow the README to install the packages, and drop in your datapackage.json and CSV dataset. Use your browser or an API testing tool to run some queries, and you should see it paginating and searching your data.
Write a converter to patch your DaatS service into a no-code workflow. This could be a Proxeus node, a Node-RED, an Airtable or Slack workflow step, a GitHub Action, etc. Whatever would potentially scratch your own itch. Make it super easy for users to connect a dataset and invoke search queries or even statistical / data science functions as embedded in their process.
Data as a (tiny) Service
This is a template repository which lets you create a quick API around your Frictionless Data Package. This could be useful in several ways: as a microservice for your SPA frontend, for integration with Web-based workflows, for paginated access to larger datasets, or for setting up a cheap and simple Data-as-a-Service offering.
The open source code is based on Python and Pandas, and can be easily extended to fit the needs of your data science project.
Getting started
Place a datapackage.json file and a data folder with your own data to start setting up an API.
If you have not used Data Packages before, an easy way to get started is to convert your dataset to a CSV file (or a set of CSV files), in UTF-8 format - which you can create with any spreadsheet program. Then, use the Data Package CLI or Create Frictionless Data tool to generate a Data Package by clicking the "Load" button and then adding and defining the columns and metadata. "Download" and place the resulting files here. Visit frictionlessdata.io for more advice on this.
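If you prefer to stay in Python, here is a rough sketch of the same step using the frictionless library - an assumption on our part (install it with pip install frictionless), with a placeholder file name:

# Infer a Data Package descriptor from a CSV file and save it as datapackage.json
from frictionless import describe

package = describe("data/trees.csv", type="package")  # "data/trees.csv" is a placeholder path
package.to_json("datapackage.json")  # write the descriptor next to your data folder

Check the generated descriptor and adjust column types and metadata by hand, or with the Create tool mentioned above.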
Installation
This repository contains a minimalist backend service API based on the Falcon framework and Pandas DataPackage Reader. To run:
cd api
virtualenv env
. env/bin/activate
pip install -Ur requirements.txt
python server.py
(Alternatively: use Pipenv and run pipenv install && pipenv run python server.py)
At this point you should see the message "Serving on port..."
Soon there will be a webpage where you can test the API. Until then ...
Test the API using a REST client such as RESTer with queries such as:
http://localhost:8000/[my resource name]?[column]=[query]
For instance, if you had a Resource in your Data Package with the name "tree", which has a "quartier" column, you can search for "Oerlikon" in it using:
http://localhost:8000/tree?quartier=Oerlikon
You can adjust the amount of output with the page and per_page parameters in your query.
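As a minimal sketch of the same kind of query from Python (assuming the requests library and the example "tree" resource described above), pagination can be combined with a column filter like this:

# Query the local DaatS API for the "tree" resource, filtering the "quartier" column
import requests

resp = requests.get(
    "http://localhost:8000/tree",
    params={"quartier": "Oerlikon", "page": 1, "per_page": 10},
)
resp.raise_for_status()
print(resp.json())  # the exact response shape depends on your Data Package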
License
This project is licensed by its maintainers under the MIT License.
If you intend to use this data in a public or commercial product, please check the data sources themselves for any specific restrictions.
eBau
Electronic building permit application for Swiss cantons
Thanks to the eBau team for presenting a lightning talk and running a stand at DINAcon 2022.
Thanks also to Giger Energy and Solargenossenschaft Region Biel for the process inputs on the design of the eBau portal during a HACKnights meetup.
While the eBau and Caluma projects currently are not fundraising online, we can support the Django REST framework, which the project depends on in the backend, as well as the frontend framework Ember.js, which is on Open Collective.
See also:
https://hacknight.dinacon.ch/project/19
{ hacknight challenges }
Log into the eBau portal of your canton and try to create a draft submission. Get a feeling for the platform and think about whether it helps you to complete a complex application process with the authorities.
Read through the documentation of the project, learn about the software architecture and political process behind it. Contact the team with any suggestions or improvement requests through GitHub or e-mail.
Deploy the project locally, or get started as a developer by installing the Caluma project on your machine. See if there are any open issues you could contribute to, help with a translation, etc., on the open source repository.
Electronic building permit application for Swiss cantons.
Overview
This repository contains the source code for the web applications used to handle electronic building permits and comparable processes in the Swiss cantons of Berne, Schwyz and Uri.
The following image shows a high-level overview of the architecture:
- The application is composed of various Docker containers, which are shown in light blue in the architecture overview.
- The frontend consists of two Ember.js apps: one for applicants submitting building permit applications ("portal"), and another used by members of the public authorities ("internal area"). The two apps can share code through the Ember addon ember-ebau-core.
- The backend is based on Python/Django and exposes a GraphQL API for forms and workflows (based on Caluma) as well as a set of domain-specific REST endpoints (Django REST Framework).
- PostgreSQL is used as the database.
Folder structure
├── compose # docker-compose files
├── db # database Dockerfile and utils
├── django # backend code, containing both API and Caluma
├── document-merge-service # document generation templates and config
├── ember-caluma-portal # Caluma-based portal
├── ember-camac-ng # Ember.js app optimized for embedding in other applications
├── ember-ebau # Ember.js based application for internal area
├── ember-ebau-core # Ember.js addon for code sharing between multiple Ember.js apps
├── keycloak # Keycloak configuration for local development
├── proxy # Nginx configuration for local development
└── tools # miscellaneous utilities
Modules
Due to ongoing modernization work, some frontend modules are not yet integrated into ember-ebau, but are still part of ember-camac-ng. A few frontend modules are not part of this repository at all yet. The following table lists the most important modules in the "internal" part of the application and their respective completeness / integration state (in the demo configuration).
Module | Description | Backend | Frontend | Part of ember-ebau |
---|---|---|---|---|
Main Nav (resource) | | | | |
Dossier list | Show a list of dossiers | ✔️ | ✔️ | ✔️ |
Task list | Show a list of tasks | ✔️ | ✔️ | ✔️ |
Templates | Manage document templates (docx) | ✔️ | ✔️ | ✔️ |
Organization | Manage details of own organization | ✔️ | ✔️ | ⏳ |
Static content | Static content, markdown editor | ✔️ | ⏳ | ⏳ |
Text components | Manage snippets for usage in text fields | ✔️ | ⏳ | ⏳ |
Subnav (instance resource) | | | | |
Tasks | View and manage tasks | ✔️ | ✔️ | ✔️ |
Form | View and edit main form | ✔️ | ✔️ | ✔️ |
Distribution | Get feedback from other organizations | ✔️ | ✔️ | ✔️ |
Alexandria | Document management | ✔️ | ✔️ | ✔️ |
Template | Generate document from template | ✔️ | ✔️ | ✔️ |
Journal | Collaborative notebook | ✔️ | ✔️ | ✔️ |
History | Shows milestones and historical data | ✔️ | ✔️ | ✔️ |
Responsible | Assign responsible users | ✔️ | ✔️ | ⏳ |
Audit | Perform structured audit | ✔️ | ✔️ | ⏳ |
Publication | Manage publication in newspaper | ✔️ | ✔️ | ⏳ |
Audit-Log | Shows form changes | ✔️ | ⏳ | ⏳ |
Claims | Ask applicant for additional info | ✔️ | ⏳ | ⏳ |
Requirements
The preferred development environment is based on Docker.
- Docker >= 20.04
- Docker-Compose
For local development:
Python:
- python 3.8
- pyenv/virtualenv
Ember:
- current LTS of Node.js
- yarn
Development
Basic setup
Docker can be used to get eBau up and running quickly. The following script guides you through the setup process. We recommend using the demo config for now, since it features the highest number of modules in the ember-ebau app.
make start-dev-env
In case you want to modify /etc/hosts manually, the following domains need to point to 127.0.0.1 (localhost):
ebau-portal.local ebau.local ebau-keycloak.local ember-ebau.local ebau-rest-portal.local
For automatic checks during commit (formatting, linting), you can set up a git hook with the following commands:
pip install pre-commit
pre-commit install
Afterwards, you should be able to access the following services:
- ember-ebau.local - new main application used for "internal" users
- ebau-portal.local - public-facing portal (Caluma-based, default choice for new projects, used in Kt. BE, UR)
- ebau.local/django/admin/ - Django admin interface
- ebau-keycloak.local/auth - IAM solution
Predefined credentials
The following administrator accounts are present in Keycloak or the DB, respectively:
Application | Role | Username | Password | Notes |
---|---|---|---|---|
demo | Admin | user | user | |
kt_schwyz | Admin | admin | admin | |
 | Publikation | adsy | adsy | |
kt_uri | Admin | admin | admin | |
 | PortalUser | portal | portal | |
kt_bern | Admin | user | user | |
Debugging
For debugging inside the container shell, use this:
make debug-django
Working locally with ember
docker-compose up -d --build db django
cd {ember|ember-camac-ng|ember-caluma-portal|ember-ebau} # Enter ember from the top level of the repo
yarn # Install dependencies
yarn test # Run tests
yarn start-proxy # Run dev server with proxy to django api
Yarn workspace
Note however that those two apps, ember-caluma-portal and ember-camac-ng, share the same node modules tree through a yarn workspace.
The common yarn workspace allows us to share code (e.g. addons) between the apps which are part of this repo (instead of following the typical approach of publishing releases on npm). This also means that
- (+) we save some disk space because of the avoided duplication in the node_modules directory
- (-) the Docker build processes of the two frontend containers have to run in the context of the root of the repo, in order to access the shared dependencies during build time
- (-) the Ember versions of ember-caluma-portal and ember-camac-ng need to be kept in sync
Django profiling
To enable django-silk for profiling, simply add DJANGO_ENABLE_SILK=True to your django/.env file. Then restart the django container and browse to http://ebau.local/api/silk/.
Visual Studio Code
The remote debugger settings for VS Code are committed to the repository.
- The configuration file is located at .vscode/launch.json.
- The keyboard shortcut to launch the debugger is F5.
- Information on VS Code debugging
To enable debugging in the django container, the ptvsd server must be started. Since this debug server collides with other setups (PyCharm, PyDev), it will only be started if the env var ENABLE_PTVSD_DEBUGGER is set to True in django/.env.
GWR API
The GWR module is developed in two separate repositories:
- Frontend: inosca/ember-ebau-gwr
- Backend: inosca/ebau-gwr
If you use the GWR module, you need to generate a Fernet key according to the documentation of the gwr backend.
You need to set this key in each environment/server in your env file. Generate a separate key for each environment, since this is used to store / read the gwr user passwords.
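As a hint only (the authoritative steps and variable names are in the gwr backend documentation), a Fernet key is commonly generated with the cryptography Python package:

# Print a fresh, URL-safe base64-encoded 32-byte Fernet key to put into your env file
from cryptography.fernet import Fernet

print(Fernet.generate_key().decode())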
Customize API
The API should be designed in a way that allows it to be used by any eBau project. For needed customizations, the following rules apply:
- each permission may be mapped to a specific role in the specific project. In case a role needs a different set of permissions than already available, introduce a new one and adjust the different views accordingly.
- for features which may not be covered by permissions, introduce feature flags.
For different feature flags and permissions, see APPLICATIONS in settings.py.
Sending email
In development mode, the application is configured to send all email to a Mailhog instance, so unless you specify something else, no email will be sent out from the development environment.
You can access Mailhog via http://ebau.local/mailhog. Any email sent out will be instantly visible there.
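One hedged way to see this in action (assuming the standard Django e-mail API and the development settings described above) is to send a test message from a Django shell inside the django container and then look for it in Mailhog:

# Inside the django container: python manage.py shell
from django.core.mail import send_mail

# Addresses are placeholders; in development the message lands in Mailhog, not a real inbox
send_mail("Test subject", "Hello from eBau dev", "noreply@example.org", ["someone@example.org"])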
License
This project is licensed under the EUPL-1.2-or-later. See LICENSE for details.
OGC API & STAC
Test a Proof of Concept app from MeteoSwiss
In a joint project of swisstopo and MeteoSwiss, an open interface for accessing ten MeteoSwiss products was provided. Selected data can be freely obtained via the STAC API and OGC API Features for test purposes during the test period. The goal of the prototype is, among other things, to obtain feedback from users via an online survey.
https://twitter.com/swiss_geoportal/status/1589889313239887872
The PoC will run until the end of November 2022. We are happy to support open initiatives like this as part of the DINAcon HACKnights and upcoming events like the GovTech Hackathon.
Photo above: OGC Sprint Recap by Chris Holmes.
See also:
https://hacknight.dinacon.ch/project/9
{ hacknight challenges }
Start with the online tutorial (GitHub) and run some example queries in your web browser. Familiarize yourself with the context of this dataset, and log into GitHub so you can ask questions & share feedback.
Learn to use APIs implementing the STAC API Specification with the STAC browser, where you can already connect to geodata repositories around the world. Use the QGIS plugin if you would like to explore the data in an open source desktop GIS tool. See also the QGIS-OAFeat interface in the tutorial.
Use a command line API testing tool like curl, or an advanced GUI like Postman, to work with service endpoints more precisely. At this point you should be ready to try connecting to the API with libraries in your choice of programming language. Your thoughts about the security, performance and reliability of this solution are particularly welcome. Please raise an issue or share your tips in our community chat.
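If you prefer to script such queries rather than click through a GUI, a minimal sketch with Python and requests follows the generic OGC API Features pattern - the base URL and collection name below are placeholders, so take the real ones from the service's Landing Page:

import requests

BASE = "https://example.org/api"  # placeholder: use the PoC's actual landing page URL

# List the available collections, then fetch a few items from one of them
collections = requests.get(BASE + "/collections", headers={"Accept": "application/json"}).json()
print([c["id"] for c in collections.get("collections", [])])

items = requests.get(BASE + "/collections/my-collection/items", params={"limit": 5},
                     headers={"Accept": "application/geo+json"}).json()
print(len(items.get("features", [])), "features returned")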
OAPI - POC
Proof of concept (POC) to ingest geospatial datasets from MeteoSwiss into a SpatioTemporal Asset Catalog (STAC) and expose as OGC API Features.
Terms of Service
- This service is experimental and thus not for operational use.
- This service is limited in time, from 1.8.2022 to 19.12.2022.
- The service has limited availability and limited operating hours: It is frequently rebooted.
- During the limited service period, data can be accessed for testing purposes only. You must provide the source (author, title and link to the dataset).
- Further, when using this service, the disclaimer of the Federal Administration and the respective terms of use must be complied with in every case. You should therefore read the disclaimer carefully: disclaimer.admin.ch.
Documentation
OGC API and STAC are designed to be explorable through links, and a good starting point is the Landing Page (aka Root Catalog), which links to capabilities, descriptions and fundamental resources of the services.
The OpenAPI definition can be consumed through the SwaggerUI. Select the appropriate server and authorization (for endpoints other than GET) to try it out.
Be aware that the API definition is not in sync with the service implementation. There are additionally transactional endpoints for Collection and Feature/Item resources, and the schemas/definitions might diverge from the actual implementation.
Usage
For now the basic use case is uploading a STAC Asset through the load-asset process. The input schema describes the JSON body of the POST request passed to its ./execute endpoint. It requires the file as a base64-encoded string, some asset properties, the collection id, and either the item id or an item object to create.
Example Python scripts for loading an asset to an existing collection, as well as extracting & creating a collection resource from a geocat.ch entry, are in the scripts folder.
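A very rough sketch of such an upload from Python is shown here; the field names and endpoint path are illustrative only, so check the OpenAPI definition and the scripts folder for the authoritative version:

import base64
import requests

with open("forecast.csv", "rb") as f:  # placeholder file name
    encoded = base64.b64encode(f.read()).decode()

# Illustrative input structure: a base64 file, the collection id and the item id to create
body = {
    "file": encoded,
    "collection": "my-collection-id",
    "item": "my-item-id",
}

# Placeholder URL for the load-asset process's ./execute endpoint; authentication is required
resp = requests.post("https://example.org/processes/load-asset/execute",
                     json=body, auth=("user", "password"))
print(resp.status_code, resp.text)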
Catalog Trees
Catalog trees can be created by adding collection resources with the property type set to Catalog, and links with the relations parent, child and/or item. Naturally, these relations should be reflected on the linked resources as well.
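As an illustration only - the ids and links here are made up, and the exact payload should be checked against the service's OpenAPI definition - such a catalog-type collection resource might look like this as a Python dictionary:

# Sketch of a collection resource acting as a catalog node in a tree
catalog_node = {
    "id": "forecasts",  # made-up identifier
    "type": "Catalog",  # marks this collection as a catalog node
    "description": "Grouping of forecast collections",
    "links": [
        {"rel": "parent", "href": "../root-catalog"},    # link up the tree
        {"rel": "child", "href": "./forecasts-hourly"},  # link down to a sub-collection
    ],
}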
Consumption
The created resources can, for example, be consumed with the STAC Browser. The assets' contents, accessible through the href, reside on an S3 bucket.
Tutorial
A TUTORIAL is provided, covering:
- Complete dataset browsing and download
- Feature data download via API, with examples
- Integration in web and fat client applications
Feedback / Survey
If you are interested in MeteoSwiss data or OGC API Features services, please answer our questions about the Proof of Concept.
Fill in our SURVEY (DE) or SURVEY (EN) which only takes about 10 min. Thank You!
Questions?
Please drop us an e-mail at customerservice@meteoswiss.ch with the subject POC OGD24.
Tusky
A popular app compatible with Mastodon
Over the past days there has been a dramatic movement of users from Twitter to Mastodon, following a change of ownership and worries about imminent changes to the platform. We have been covering DINAcon 2022 on the Fediverse, using open source apps.
Tusky is one of the apps being used, reputedly one of the most stable and performant options for Android users.
{ hacknight challenges }
Install Tusky, pick a server, create a Mastodon account, start tooting! Here are some good starting points.
Volunteer to help moderate the server you are part of. Contribute financially - to the open source projects, as well as to the maintainers of the server you use. Help to translate Tusky into more languages, test for bugs, suggest feature improvements.
Install the local development environment (see Readme). Look through the open issues on GitHub, roll up those sleeves, and help out with the code.
Tusky
Tusky is a beautiful Android client for Mastodon. Mastodon is an ActivityPub federated social network. That means no single entity controls the whole network, rather, like e-mail, volunteers and organisations operate their own independent servers, users from which can all interact with each other seamlessly.
Features
- Material Design
- Most Mastodon APIs implemented
- Multi-Account support
- Dark, light and black themes with the possibility to auto-switch based on the time of day
- Drafts - compose posts and save them for later
- Choose between different emoji styles
- Optimized for all screen sizes
- Completely open-source - no non-free dependencies like Google services
Testing
The nightly build from master is available on Google Play.
Support
Check out our FAQs, your question may already be answered. If you have any bug reports, feature requests or questions please open an issue or send us a message at Tusky@mastodon.social!
For translating Tusky into your language, visit https://weblate.tusky.app/
Head of development
This app was developed by Vavassor@mastodon.social. The current maintainer is ConnyDuck@chaos.social.
Development chatroom
#wirlernenweiter
wLw Karte
Wir lernen weiter needs you!
The association Wir lernen weiter (wLw for short) collects laptops from all over Switzerland, professionally refurbishes them, and then passes them on to people affected by poverty throughout the country. This is done through a large network of partners, who check the financial situation of the inquirer and order the laptops accordingly. People who need a laptop can contact the association via the website.
The basic situation is described on a form: for example, whether or not the person receives social welfare. Depending on the combination of these answers, you then receive a link to a map showing the partners for the respective situation. In the context of this concept, only the partners that are active in the social welfare context are discussed. On this map you can see all the municipalities in Switzerland and further information: whom to contact, and whether the municipality is already part of wLw or not.
We could use your help in improving this part of our service. For more information on how to get involved, support or join the association, visit our website.
💸 Donations: wLw Spendenkonto
💸 Support Zorin OS (Linux distro used by wLw)
{ hacknight challenges }
Explore the current map of partner municipalities, and get to know the project and its extensive documentation (in the wikis).
Develop an alternative based on open maps, using the same dataset. See the request in the attached PDF for more detail. UPDATE: we are working on it, see README and LOG above.
Think about some other ways the open source community could support this project: from improving documentation, to maps of local Repair Cafés and Linux User Groups, to recruiting people in our community to volunteer some hours with users directly. By the way, wLw is looking for good techies here.
wLw-Partner-Map
Automating updates to wLw's partner map.
Contributed as part of DINAcon HACKnights 2022.
Run locally
make setup
make run-exporter
make run-map
Run with docker/podman
make setup
make images
make run-exporter-container
make run-map-container
# open http://localhost:8080
Challenges
BigCode
"A boost in performance, that's kind of like hiring 33% more coders"
The next generation of programmers will have new tools for improving the transparency of where code snippets and generated code are coming from. Leandro von Werra (machine learning engineer at Hugging Face) presented the BigCode project at DINAcon 2022: a research collaboration inviting us to pick up the tools, use the data, and become more conscientious about how we license and reuse open source code.
Photo of @lvwerra by Oleg Lavrovsky - CC BY 4.0
{ hacknight challenges }
Use Am I in The Stack? to see if your code is included in the project, and follow @BigCodeProject to stay up to date with developments. There is not yet a user-facing tool available, but stay tuned!
Learn to work with Hugging Face, where projects like The Stack - 3 TB of permissively licensed source code that is the basis for BigCode's model weights and datasets - can be found: take the official course online. Example notebooks can be found on GitHub. If you manage to crunch some of this data, drop a link to your notebook on forum.opendata.ch.
Explore the BigCode Dataset and contribute to some of the engineering, ethical and legal issues being worked on. Do you have a relevant professional background? BigCode is a research collaboration that you can apply to join here.
BigCode Dataset
This repository gathers all the code used to build the BigCode datasets such as The Stack, as well as the preprocessing necessary for model training.
Contents
- language_selection: notebooks and file with language to file extension mapping used to build The Stack v1.1.
- pii: code for running PII detection and anonymization on code datasets.
- preprocessing: code for filtering code datasets based on:
  - line length and percentage of alphanumeric characters.
  - number of stars.
  - comments to code ratio.
GoToSocial
Golang fediverse server
In addition to encouraging you to cover #DINAcon on the Fediverse, let's learn more about running your own open source server. While the official Mastodon server software is the most widespread option, more and more alternatives are becoming available. Indeed, every web application can connect to the distributed social network using the ActivityPub standard - which we recently added to this platform (dribdat) as well!
The sandbox
GoToSocial is one such alternative service, written in the Go programming language. For this channel, we have set up a sandbox instance:
You need to contact us for an account. You also need to install a Mastodon-compatible client, as detailed below.
{ hacknight challenges }
Install an application that you can connect your account to, and start tooting!
Volunteer to help moderate the server you are part of. Help to translate GoToSocial into more languages, test for bugs, suggest feature improvements.
Install the local development environment (see Readme). Look through the open issues on GitHub, roll up those sleeves, and help out with the code. Support and follow the project at Open Collective.
GoToSocial is an ActivityPub social network server, written in Golang. With GoToSocial, you can keep in touch with your friends, post, read, and share images and articles. All without being tracked or advertised to! GoToSocial is still ALPHA SOFTWARE. It is already deployable and useable, and it federates cleanly with many other Fediverse servers (not yet all). However, many things are not yet implemented, and there are plenty of bugs! We foresee entering beta around the beginning of 2024. Documentation is at docs.gotosocial.org. You can skip straight to the API documentation here. To build from source, check the CONTRIBUTING.md file.
#govtechhackdays
GovTech Hackathon
Take part and contribute in upcoming hackdays of Opendata.ch
DINAcon 2022 is an awesome venue, the HACKnights the perfect time, and you are the best crowd for a sneak preview of our plans to run the GovTech Hackathon next year, organised by Opendata.ch: it will unite diverse offices with civil society in challenges that bring API-first, open-by-default design to the services of digital government.
NOW PUBLIC DISCLOSURE - PLEASE RESHARE!
https://fosstodon.org/@OpendataCH@mastodon.social/109466868205267425
https://twitter.com/OpendataCH/status/1600107565576163328
🎟️ https://www.bk.admin.ch/govtech-hackathon
{ hacknight challenges }
Save the date! Sign up for the newsletter at https://opendata.ch/newsletter to stay tuned, or remember to visit the website for more details early next year.
Join this project if you would like to set up a Pre-Event, help run a data prep workshop, or contribute to the event production in some other way.
The Hackathon will run on dribdat, the same platform as the HACKnights. Poke at the internals, see the Issues list, test the APIs: all the details are in the About page. Let's get ready to rrrumble.
OpenRefine
A free, open source power tool for working with messy data and improving it
OpenRefine (previously Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data. OpenRefine always keeps your data private on your own computer until YOU want to share or collaborate. Your private data never leaves your computer unless you want it to. (It works by running a small server on your computer and you use your web browser to interact with it). People in the open data and data journalism fields around the world use it regularly - and so should you!
OpenRefine's packaging for MacOS and Windows could be improved in many ways, and we are looking for help on this front. We are looking for proposals from prospective contractors to improve the install experience of OpenRefine on MacOS and/or Windows. After a similar effort on Ubuntu/Debian packaging, this initiative is meant to improve the user experience on other platforms, to lower the install barrier for a broader audience. This project is funded by an EOSS Diversity and Inclusion grant from CZI. (See detailed links in the Pro section below)
{ hacknight challenges }
Download OpenRefine and start it on your computer. Download some open data and explore this powerful data wrangling tool.
Check out the results of the 2022 Community Survey. Clean up, analyse, or prepare a new dataset using OpenRefine, and share your results with the community here or at forum.opendata.ch. Help to translate OpenRefine.
Compile the OpenRefine code from source. As detailed in the latest OpenRefine blog post, we are hoping that proposals could solve some of the following issues:
- (MacOS) Provide an Applications shortcut in the DMG distribution, with a suitable background (#5205)
- (MacOS) Sign and notarize our DMG distribution (#2191)
- (Windows) Offer a proper installer / uninstaller on Windows (#3224)
- (Windows) Give an easier way to start and stop OpenRefine with a system tray integration and log viewer (#3221)
- (Windows) Configuration for OpenRefine on Windows should use only 1 config file (.ini) (#3057)
- (Windows) Sign openrefine.exe to eliminate extra security warnings (#3003)
OpenRefine
OpenRefine is a Java-based power tool that allows you to load data, understand it, clean it up, reconcile it, and augment it with data coming from the web. All from a web browser and the comfort and privacy of your own computer.
Download
Snapshot releases
Latest development version, packaged for:
- Linux
- MacOS
- Windows without embedded JRE
- Windows with embedded JRE
Run from source
If you have cloned this repository to your computer, you can run OpenRefine with:
- ./refine on Mac OS and Linux
- refine.bat on Windows
This requires JDK 11, Apache Maven and NPM.
Documentation and Videos
Contributing to the project
Contact us
Licensing and legal issues
OpenRefine is open source software and is licensed under the BSD license located in LICENSE.txt. See the folder licenses for information on open source libraries that OpenRefine depends on.
Credits
This software was created by Metaweb Technologies, Inc. and originally written and conceived by David Huynh dfhuynh@google.com. Metaweb Technologies, Inc. was acquired by Google, Inc. in July 2010 and the product was renamed Google Refine. In October 2012, it was renamed OpenRefine as it transitioned to a community-supported product.
See CONTRIBUTING.md for instructions on how to contribute yourself.
OSS Benchmark
Tracking the institutions and activity of local open source projects
On social media, events, and projects like the OSS Benchmark and OSS Directory, our community has important discussions about how we track source publications, evaluate quantity vs. quality, and verify responsibility.
Discover the range and breadth of the open source community at OSS Benchmark. Does it correlate to what you hear and see at DINAcon? Are any important institutions missing? Explore the accounts and repositories, look at their statistics, and collect some ideas of how this kind of data could be used.
This challenge builds on the discussion at DINAcon 2019:
https://hacknight.dinacon.ch/project/29
😻 Follow the Institute for Public Sector Transformation on GitHub.
{ hacknight challenges }
A community-run list helps more people get involved in tracking the situation. Contribute at least 1 missing institution or project to the OSS Benchmark by opening an Issue, or starting a Pull Request on github_repos.json.
Run the data we have collected through your favorite open source data visualization tool and see if you could add some compelling criteria for it, such as cumulative stars or commits. There is a tip here for loading data into a Jupyter notebook.
Install the project locally, get it running on your machine, patch some of the open issues. Perhaps you could write a contributor's guide (#163), or add support for another kind of repo (#145)?
Visit our website!
Generate data
dependencies: docker or python
using docker
docker build -t oss-github .
docker run --name oss-github-runner --rm oss-github
docker rm oss-github-runner
docker rmi oss-github
using python
cd ./data-gathering
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python OSS_github_benchmark.py
Start Visualization
dependencies: node
cd frontend
npm install
npm start
Explore the data with jupyter notebook
There is a Jupyter notebook that loads a pickle file of the data.
It's located at ./data-gathering/github-data.pickle
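If you just want a quick look at the data without the notebook, a minimal sketch (assuming the pickle holds a pandas-compatible object, as the notebook suggests) is:

import pandas as pd

# Load the pre-gathered GitHub statistics from the repository's pickle file
data = pd.read_pickle("./data-gathering/github-data.pickle")
print(type(data))  # inspect what the pickle actually contains before going further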
Deployment
git subtree push --prefix data-gathering prod master