Design, creation, supervision, administration, implementation, and management of data structures in Oracle, MariaDB, and MongoDB.
Validation of massive data loads at the staging level and their publication in the production environment.
Evaluation of new technologies and their adaptation to the business without disrupting operations.
Database status reports.
Migration from Oracle Data Guard 12 to 19; redesign of the cluster structure.
Recreation of tables and processes to improve their execution times.
Hash, time-based, and IDx partitioning.
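As an illustration only, a minimal sketch of hash and time-based (interval range) partitioning in Oracle, executed through python-oracledb, with a local partitioned index added on the assumption that "IDx" refers to index partitioning; table, column, and connection names are hypothetical.

import oracledb

# Placeholder credentials and DSN; not a real environment.
conn = oracledb.connect(user="dba_user", password="***", dsn="db-host:1521/SERVICE")
cur = conn.cursor()

# Hash partitioning: spread rows evenly across 8 partitions by customer_id.
cur.execute("""
    CREATE TABLE sales_hash (
        sale_id     NUMBER,
        customer_id NUMBER,
        sale_date   DATE,
        amount      NUMBER(12, 2)
    )
    PARTITION BY HASH (customer_id) PARTITIONS 8
""")

# Time-based (interval range) partitioning: one partition per month on sale_date.
cur.execute("""
    CREATE TABLE sales_by_month (
        sale_id   NUMBER,
        sale_date DATE,
        amount    NUMBER(12, 2)
    )
    PARTITION BY RANGE (sale_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2023-01-01'))
""")

# Local (partition-aligned) index on the range-partitioned table.
cur.execute("CREATE INDEX sales_by_month_ix ON sales_by_month (sale_id) LOCAL")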
Adaptation of new systems to consume or extract data in materialized form, with publication to flat files for external consumption by Data Science (Python).
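A minimal sketch of that flat-file publication step, assuming python-oracledb, a hypothetical materialized view MV_DAILY_SALES, and a hypothetical export path.

import csv
import oracledb

conn = oracledb.connect(user="dba_user", password="***", dsn="db-host:1521/SERVICE")
cur = conn.cursor()

# Refresh the materialized view before extracting (view name is an assumption).
cur.callproc("DBMS_MVIEW.REFRESH", ["MV_DAILY_SALES"])

# Extract the materialized data and publish it as a flat CSV file.
cur.execute("SELECT sale_date, store_id, total_amount FROM mv_daily_sales")
with open("/data/exports/mv_daily_sales.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur)                                  # data rows

conn.close()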
Management of users, roles, and access profiles.
Work with the IT, Development, and Data teams on new processes, validating functionality in test and production environments.
Creation of views or tables for data consumption by external services such as Power BI and MicroStrategy (monitoring and process improvements).
Data monitoring and tracking via ETL (from source to destination) using tools such as Visual Studio.
Adaptation of structures for use with the data lake.
Big data work using Percona tools.
MongoDB database administration (users, resource usage, COLLSCAN identification, index creation).
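A minimal sketch of the COLLSCAN identification and index creation workflow, assuming PyMongo and hypothetical database, collection, and field names.

from pymongo import MongoClient, ASCENDING

# Placeholder connection string; not a real environment.
client = MongoClient("mongodb://dba_user:***@mongo-host:27017")
coll = client["sales"]["orders"]

# explain() exposes the winning plan; a "COLLSCAN" stage means a full collection scan.
plan = coll.find({"customer_id": 12345}).explain()
stage = plan["queryPlanner"]["winningPlan"]["stage"]
print("winning plan stage:", stage)

if stage == "COLLSCAN":
    # Create a supporting index so the query can switch to an IXSCAN.
    coll.create_index([("customer_id", ASCENDING)], name="ix_customer_id")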
Use of reverse engineering tools.
Cloud knowledge:
OCI: migration of on-premises Exadata to Autonomous, services, VCN, Object Storage, users, and access.
AWS: RDS databases (PostgreSQL, MySQL, MariaDB, Oracle); EC2 (Linux installation and databases such as MariaDB in a cluster with MaxScale, RHEL with Oracle Data Guard); work with S3 (backups and object management); RDS engine upgrades and instance type changes (see the sketch after this list).
Cassandra: administration, monitoring, tuning, and data migration.
GCP: Spanner databases, monitoring of index (IDX) creation.
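A minimal sketch of the RDS engine upgrade, instance type change, and S3 backup handling mentioned under AWS above, assuming boto3 and hypothetical instance, bucket, and version identifiers.

import boto3

rds = boto3.client("rds", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Major engine version upgrade, deferred to the next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="mariadb-prod",
    EngineVersion="10.11.6",
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=False,
)

# Instance type change, applied immediately.
rds.modify_db_instance(
    DBInstanceIdentifier="mariadb-prod",
    DBInstanceClass="db.r6g.xlarge",
    ApplyImmediately=True,
)

# Store a logical backup in S3 for retention and object management.
s3.upload_file("/backups/mariadb-prod.sql.gz", "db-backups-bucket",
               "mariadb/mariadb-prod.sql.gz")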