3D reconstruction: ARA San Juan in the Atlantic Ocean

3D RECONSTRUCTION OF THE ARA SAN JUAN

CONTEXT

A missing Argentine naval submarine was found a year and a day after it vanished in the South Atlantic Ocean with 44 crew members on board. The wreckage of the ARA San Juan, which “suffered an implosion,” was found about 870 meters (2,850 feet) down on the ocean floor.

The Argentine navy had called off its rescue operation about two weeks after the sub’s disappearance, saying there was “no chance of survival” for its crew, but search efforts continued.

The navy said early Saturday that a “positive identification” had been made by a remotely operated submersible deployed by Ocean Infinity, a US firm commissioned by the Argentine government that began searching on September 7. On Sunday, Argentina’s navy released the first images of the sub on the seafloor, under 2,975 feet of water, nearly 400 miles east of the city of Comodoro Rivadavia in Argentina’s Patagonia region.

HOW THE 3D RECONSTRUCTION WAS DONE

On Tuesday, November 20, 2018, three days after the discovery of the remains of the ARA San Juan, we received the first of the messages that would make the reconstruction of the scene possible. The process took almost six months of teamwork. Interviews were conducted with a dozen people, including family members, experts and sources with access to images that were not available to the public.

The first draft of the 3D reconstruction began with an audio recording made by a relative of one of the 44 crew members of the ARA San Juan at the Mar del Plata naval base on the morning of November 17 last year. The relatives were shown three photos that confirmed the discovery of the sunken submarine at a depth of 907 meters.

That first reconstruction was shared with people who were still sailing toward South Africa on the Seabed Constructor, the ship with which Ocean Infinity conducted the search. Those who saw it were surprised, because they still did not have a complete picture of the submarine and the debris surrounding it. Because of the darkness at that depth, they had only been able to see photos and videos of different parts of the submarine, never a complete image.

That image would only become available in December, when the mosaic was made: a kind of puzzle assembled from the 67,000 photos taken by autonomous underwater vehicles (AUVs) during the early hours of November 17.

Technical problems in opening some of the images delivered to the court delayed their presentation to the relatives, the experts and the bicameral congressional commission that follows the investigation into the sinking of the submarine.

On April 23, more than 140 family members were finally able to see the mosaic and almost five hours of video showing the ARA San Juan on the bottom of the Argentine Sea. With the reconstruction drafts and pencil in hand, we met with infographers, journalists and staff from other areas of the newspaper who had seen the images, which allowed us to reach the result presented in this interactive infographic.

Technologies used:

The render was made with the Carrara 3D software, and the animation was achieved in JavaScript using a sequence of images.
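As a rough sketch of that technique (the element id, frame paths and frame count below are hypothetical, not the production values), a frame-by-frame animation over pre-rendered images can be driven from JavaScript like this:

```javascript
// Minimal sketch of a frame-by-frame animation using pre-rendered images.
// The element id, frame naming and counts are placeholders.
const TOTAL_FRAMES = 120;   // number of frames exported from the 3D render (assumed)
const FPS = 24;             // playback speed
const img = document.getElementById('submarine-frame'); // an <img> in the page

let current = 0;
setInterval(() => {
  current = (current + 1) % TOTAL_FRAMES;
  // frames exported as frame_000.jpg, frame_001.jpg, ...
  img.src = `frames/frame_${String(current).padStart(3, '0')}.jpg`;
}, 1000 / FPS);
```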

 


Automated data-driven content

The project was born in the context of the 2017 elections, with the purpose of creating news articles automatically from datasets using templates. It allowed us to cover results in much more detail across Argentina's entire territory: 530 districts, each with its own results and local map.

We then saw an opportunity to help the newsroom with its routine work on repetitive, data-driven news that can be supported with time series and graphs. So we began to produce daily and weekly articles, automated through the systematic collection of data from different sources on different topics.

All these automatically generated articles are accompanied by infographics, images and interactive visualizations created from data loaded automatically into Google Spreadsheets.

The first subjects covered by the project were chosen with Argentine readers' interests in mind: the dollar, inflation and Argentine football, three central subjects in the everyday life of our country.

Examples of automated data-driven content in LA NACION:

Dollar: https://www.lanacion.com.ar/economia/dolar-hoy-asi-cotiza-el-3-de-abril-en-banco-nacion-y-otras-entidades-nid2234795

Inflation: https://www.lanacion.com.ar/economia/la-canasta-de-precios-de-la-nacion-tuvo-un-alza-de-187-en-las-ultimas-cuatro-s-nid2232905

Football: https://www.lanacion.com.ar/deportes/colon-san-martin-de-san-juan-el-mapa-de-los-remates-y-quien-dio-mas-pases-nid2234453


Dollar: daily currency exchange

In Argentina the price of the dollar has an important place in the life of the population. Every Argentine knows the daily exchange rate, because the fluctuation and volatility of the peso make daily monitoring necessary. Our economic history has instilled in Argentines a lasting distrust of the national currency, and since the threat of recession and devaluation is always latent, a large part of the population's savings is held in dollars. In addition, high inflation rates discourage saving in pesos, because they quickly lose value. In fact, people are so interested in this topic that it has a permanent place on the homepage of every media outlet.

Before automation, every morning a journalist tracked the day's exchange rate and wrote several articles as the dollar moved throughout the day.

This was a clear case for automation: it eases the work of economic journalists and ensures that each article is consistent with the rates reported by the banks, accompanied by an infographic that visualizes the variation over time.

The automatic article created in the morning reports the previous day's rate, is then updated with the current day's rate, and in the afternoon a new article is created when the market closes. This also makes the journalists' work easier: when there is a significant rise or fall, the journalist can use the automatic article as a basis for more content, explaining the reasons, consulting experts and adding them as sources.

 

Inflation is another central subject in the daily life of Argentines. In fact, inflation in 2018 was the highest in 27 years: it reached 47.6% for the year. There are several sources of information about price variation, but for readers these reports end up being abstract numbers, because they do not reflect their daily reality or the real impact on their household budget.

For this reason, we decided to create a platform where the prices of products sold in supermarkets can be tracked automatically and in real time. We created “Canasta LA NACIÓN” (LA NACION Basket), a price monitor that allows weekly and monthly tracking of the prices of several products such as noodles, soft drinks and toothpaste, among others.

Since then, once a week we have produced an automatic article on the basic food basket of 43 products, for both the digital and print editions, with a corresponding interactive visualization.

 

Soccer:

In order to give more context and cover all soccer games, we automate articles for the different matches of the Argentine Football Association. The automatic articles come in three types:
a) An interactive visualization with the passes, shots and movements of the players of the two teams in each match that is played.

b) Twenty-four hours before a match, an article is created with minimal information and pre-loaded widgets, so that it updates automatically as the match is played.

c) A summary of all the matches played throughout Argentina, including automatic articles on local matches played in the different provinces.

INNOVATION

The project is innovative because we found a way to work in teams with editors who understood how technology and data can help them better serve audiences and reach larger ones through broader coverage. We developed an in-house automation platform customized to each data source, and we also work with our graphics department to have the data visualizations ready.

Work that was repetitive and tedious became, thanks to this process and technology, a successful and efficient task that preserves the touch of quality journalism.

The choice of topics helped change the newsroom culture, since several traditional journalists are experiencing the benefits of technology and of the context that data gives to support their stories.

IMPACT

The impact of the project was immediate, particularly in the increase of visits across all the topics covered. But beyond figures and traffic, it also fostered a culture of cooperation between traditional journalists, data producers and developers. Journalists now approach LA NACIÓN DATA to request the automation of different articles, in order to add value to their work and speed up the productions that demand the most human effort.

People constantly comment, complain, question and give their opinions about inflation, football and the dollar, and when they do, they share LA NACIÓN news articles.

Source and Methodology

The cycle for creating an automated article flow has several possible starting points. In some cases, an editor realized that the same article was being repeated frequently and asked us about automating it to lighten the workload of the journalist in charge. In others, it arose from the opportunity to cover more topics and subtopics, such as all the soccer games in Argentina's first and second divisions, every week.

Once we confirm the possibility and the need for automation, we research the source and origin of the data. The source may be:

- A database regularly and manually updated by the journalist
- An API
- Scraping and building a dataset from scratch

The next step was to create multiple text possibilities according to variations in the data. These conditional texts serve as templates and, based on the figures obtained, are used to produce thousands of articles.
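As an illustration only (the real templates live in our automation platform and are more elaborate; the thresholds, field names and wording here are hypothetical), a conditional template for the daily dollar article could look like this:

```javascript
// Hypothetical sketch of a conditional text template for the daily dollar article.
// Thresholds, wording and field names are illustrative, not the production rules.
function dollarParagraph({ today, yesterday }) {
  const diff = today - yesterday;
  const pct = Math.abs((diff / yesterday) * 100).toFixed(1);
  if (Math.abs(diff) < 0.05) {
    return `The dollar closed at ${today.toFixed(2)} pesos, virtually unchanged from the previous day.`;
  }
  const direction = diff > 0 ? 'rose' : 'fell';
  return `The dollar ${direction} ${pct}% and closed at ${today.toFixed(2)} pesos, against ${yesterday.toFixed(2)} the previous day.`;
}

console.log(dollarParagraph({ today: 43.75, yesterday: 43.10 }));
```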

The last step was to configure the data collection and article generation and to define a date and time (a cron schedule) for publication, which happens automatically without passing through an editor's hands.

Technology

The project has a serverless architecture on the AWS stack. Each automatic article has its own Lambda function written in Python. When the data source is a dynamic site built in JavaScript, Node.js is used with Headless Chrome to do the scraping. PostgreSQL is used as the database.
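For the Node.js/Headless Chrome scrapers, the pattern is roughly the following. This sketch assumes Puppeteer as the Headless Chrome driver; the URL and CSS selector are hypothetical placeholders, not one of our actual sources:

```javascript
// Sketch of scraping a JavaScript-rendered page with Headless Chrome.
// Assumes Puppeteer; the URL and CSS selector are hypothetical placeholders.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com/exchange-rates', { waitUntil: 'networkidle0' });

  // Extract a value rendered client-side by the site's own JavaScript.
  const rate = await page.$eval('.sell-rate', el => el.textContent.trim());
  console.log('Scraped rate:', rate);

  await browser.close();
})();
```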

We use the Google Docs API and present updated graphs in the front end using JavaScript and Tableau Public.
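One simple way to pull updated spreadsheet data into a front-end chart is to read the sheet's CSV export from the browser. This is only a sketch under that assumption: the sheet id and column layout are placeholders, and our real pipeline also feeds Tableau Public.

```javascript
// Sketch: fetch a publicly shared Google Spreadsheet as CSV and parse two columns.
// The sheet id and column layout are hypothetical placeholders.
const SHEET_CSV_URL =
  'https://docs.google.com/spreadsheets/d/SHEET_ID/export?format=csv';

async function loadSeries() {
  const res = await fetch(SHEET_CSV_URL);
  const text = await res.text();
  return text
    .trim()
    .split('\n')
    .slice(1) // skip the header row
    .map(line => {
      const [date, value] = line.split(',');
      return { date, value: Number(value) };
    });
}

loadSeries().then(series => {
  console.log('Loaded', series.length, 'rows for the chart');
});
```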


A year after the disappearance of the submarine ARA San Juan

Unpublished details of the disappearance

CONTEXT

To mark the first anniversary of the tragedy of the ARA San Juan submarine, we made a scrollytelling piece presenting all the details and data about the sinking. It was one of the most tragic events of recent years, involving the death of the 44 crew members on board.

The purpose was to develop a visual piece with new data from the year-long official investigation, to shed light on some of the uncertainties and controversies of the case. The storytelling was conceived for a wide audience, without segmentation, as the subject emotionally involved the whole of society.

INNOVATION

The intention was to bring together, in a single development, a multiplicity of multimedia resources and to make exclusive data on the subject available. It is a vertical storytelling piece that requires only a minimum of interaction from the user: scrolling. It includes data from official naval reports, videos of the crew members' families produced especially for the occasion, audio of the official naval announcements, 3D animations of the submarine made with two programs, Carrara and Blender, and geolocated satellite data of the search operations collected from the MarineTraffic vessel-tracking platform.

Everything was designed so that the piece surprises the user with the successive appearance of these elements as the story goes on.

Sources and methodology 

To create the content for the project, we searched for and obtained official reports from the investigation. This allowed us to reconstruct and report what had happened as faithfully as possible (something that had not been done until then). We also obtained the transcripts of the communications between the submarine and the operations base, which enabled us to georeference key moments of the voyage and the subsequent sinking.

Moreover, the data service of marinetraffic.com was used to recreate the search for the submarine. The tool is a global pioneer in vessel tracking that we got to know in a data journalism course at the International Journalism Festival in Perugia, Italy. We also contacted the families of the victims, who provided testimonial evidence: the last WhatsApp messages sent to and received from their children before the fatal outcome.

Technology

The main purpose was to deliver different types of content through scrolling: when a new section of the piece enters the screen, something should happen that draws the user's attention.

For that purpose, we used a JavaScript library called Scrollama.js, which helped us detect which part of the interactive was on screen and trigger different events. Another challenge was controlling the videos with the scroll; to do so, we preload them before the user reaches them on screen. For this last part we used our JS Player video hosting platform.
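The basic Scrollama wiring looks roughly like this. It is a simplified sketch: the '.step' class and the video play/pause logic are illustrative, not the production code.

```javascript
// Simplified sketch of the Scrollama setup that triggers events per scroll step.
const scroller = scrollama();

scroller
  .setup({
    step: '.step',  // each section of the scrollytelling
    offset: 0.5     // trigger when a step crosses the middle of the viewport
  })
  .onStepEnter(({ element }) => {
    element.classList.add('is-active');
    const video = element.querySelector('video');
    if (video) video.play();   // start the preloaded video for this step
  })
  .onStepExit(({ element }) => {
    element.classList.remove('is-active');
    const video = element.querySelector('video');
    if (video) video.pause();
  });

// Recalculate trigger positions when the window is resized.
window.addEventListener('resize', scroller.resize);
```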

Another important part of the interactive was showing a 3D sequence of the sinking of the submarine. Given the need to display it in high quality, we split the video into an image sequence with a Python script, so that the user downloads the frames while scrolling. That way we made sure the images displayed correctly and the sequence could be controlled consistently.
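Conceptually, mapping scroll progress to a frame of the extracted sequence works like the sketch below; the container id, frame count and file naming are hypothetical.

```javascript
// Sketch: map the scroll progress of a section to a frame of the 3D sequence.
const container = document.getElementById('sinking-sequence');
const frame = document.getElementById('sinking-frame'); // an <img> element
const TOTAL_FRAMES = 240; // assumed number of extracted frames

window.addEventListener('scroll', () => {
  const rect = container.getBoundingClientRect();
  const scrollable = rect.height - window.innerHeight;
  if (scrollable <= 0) return;
  // 0 at the top of the section, 1 at the bottom.
  const progress = Math.min(Math.max(-rect.top / scrollable, 0), 1);
  const index = Math.round(progress * (TOTAL_FRAMES - 1));
  frame.src = `sequence/sinking_${String(index).padStart(4, '0')}.jpg`;
});
```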

Another useful JavaScript library was Anime.js, which helped us animate the final part of the submarine's voyage along an SVG path, driven by the scroll.
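A sketch of that idea, assuming a path with id '#route-path' inside a '#route-section' container (both hypothetical names), is:

```javascript
// Sketch: draw the submarine's route along an SVG path as the user scrolls.
const routeAnimation = anime({
  targets: '#route-path',
  strokeDashoffset: [anime.setDashoffset, 0], // classic "line drawing" effect
  easing: 'linear',
  duration: 1000,
  autoplay: false // progress is driven by the scroll, not by time
});

window.addEventListener('scroll', () => {
  const rect = document.getElementById('route-section').getBoundingClientRect();
  const scrollable = rect.height - window.innerHeight;
  if (scrollable <= 0) return;
  const progress = Math.min(Math.max(-rect.top / scrollable, 0), 1);
  routeAnimation.seek(progress * routeAnimation.duration);
});
```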

IMPACT

The piece performed very well in metrics: the time spent on the page was 4.26 minutes, well above the site average, and it reached a very good number of unique users (76,762). It was also among the three most visited articles of the day.

 


Cuadernos de la Corrupción: the investigation that changed Argentine political history

INVESTIGATION


The investigative work formally began on January 8, 2018, when a LA NACION journalist, Diego Cabot, met at the home of one of his sources. At the meeting he was handed eight notebooks written by a driver for Roberto Baratta, one of the most important officials of the former Ministry of Federal Planning headed by Julio De Vido, the most significant ministry during the 12 years of Kirchnerist governments, from 2003 to 2015.

The notebooks contained a detailed record of every car trip the driver made over 10 years, with only a few interruptions, carrying Roberto Baratta and other officials to collect kickback and bribe money from major Argentine companies that had been awarded public works contracts.

From the start of the investigation, Diego Cabot and the small team that accompanied him throughout the process made a fundamental decision. Given the number of former officials and top-tier businessmen involved, they decided not to publish anything until they were certain that what was written was true. No one outside that group knew the investigation was under way.

 

In a first stage, the eight notebooks were transcribed into an Excel database, and work began on each record, identifying every objective element that could be checked. The following were listed:

- Names and positions

- Origin and destination addresses of the trips

- License plates of the cars

- Company names

- Delivery locations

- Personal data of the people identified at the moments when the bribes were paid

- Amounts of the bribes

 

During the checking and verification process:

- Official and corporate records were consulted

- A large number of addresses were visited

- Corporate information on the companies involved was collected

- The payments recorded were cross-checked against the details of the State's accounting

- The bribe payments were cross-referenced with information on public works tenders and contracts, and with the percentage of completion of each project, to see whether there was a relationship with the bribe amounts

 

After the checking, and certain that it was all true, the group understood that this was the largest corruption scheme ever uncovered in Argentina.

 

The material documenting the corruption scheme was a private document and, given Argentina's history, publication in LA NACION could have led to evidence disappearing. That is why, in March 2018, the journalist began informal talks with a federal prosecutor (Carlos Stornelli), and on April 10 the formal complaint was filed with the courts.

 

In the prosecutor's office, in the court and at LA NACION there was a commitment: no one could know about the case and the silence had to be absolute. “If the investigation leaked, our lives were at risk,” the prosecutor on the case said months later.

 

In the early hours of August 1, seven months after Diego Cabot obtained the evidence, the case became public with a wave of 17 arrests and 36 raids.

 

The case was filed under the name of the former president of the Nation: “Fernandez Cristina Elizabet y otros s/ asociación ilícita” (9608/2018). The judge found that the data provided showed the existence of a criminal organization made up of public officials who, using official resources (including vehicles, employees, cell phones, and so on) and led by the former heads of the National Executive Branch (Néstor Carlos Kirchner and Cristina Elisabet Fernández) and of the Ministry of Federal Planning (Julio Miguel De Vido), sought between 2003 and 2015 to collect illegitimate sums of money from various private individuals, many of them contractors for national public works.

That same day, the data team decided to gather all the spreadsheets used in the first stage in order to deepen the analysis and processing of the notebook records. First, the transcriptions were checked to verify that the amounts, people, and places of origin and destination handwritten in the notebooks were correct. That verification process was also used for a rigorous normalization and structuring of the data, unifying the currency and the descriptions and standardizing names and addresses. The exact date and time of each trip was also added.

A classification methodology was also built for each trip, dividing them according to whether: a) it was a delivery of the bag of money to the head of the bribery scheme, or b) it consisted of collecting the money handed over by companies as kickbacks. A series of tags was also included to identify the relevance of each record.

Building on that classification, we created multiple equivalence dictionaries, because the notebooks used several terms for the same concept. For example: “bolso” (bag) = “valija” (suitcase) = “maletín” (briefcase). We did the same with personal names and addresses.
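In code, an equivalence dictionary of this kind can be as simple as a lookup map. The snippet below is only a sketch with a handful of illustrative entries; the real dictionaries were far more extensive and also covered names and addresses.

```javascript
// Sketch of an equivalence dictionary that normalizes synonyms found in the notebooks.
const EQUIVALENCES = {
  'valija': 'bolso',   // suitcase -> bag
  'maletín': 'bolso',  // briefcase -> bag
  'bolso': 'bolso'
};

function normalizeTerm(raw) {
  const key = raw.trim().toLowerCase();
  return EQUIVALENCES[key] || key; // leave unknown terms untouched
}

console.log(normalizeTerm('Valija'));   // -> "bolso"
console.log(normalizeTerm('maletín'));  // -> "bolso"
```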

All the journalistic investigation was accompanied by exclusive applications and visualizations that allowed users to browse all the notebooks digitally and interactively, along with the latest developments in the case.

Some of the resulting investigations include:

IMPACT

In one month, the case file accumulated 35 volumes of proceedings, in addition to voluminous documentation and annexes. Seventy raids were carried out: 36 in the City of Buenos Aires, 24 in the province of Buenos Aires, 7 in Misiones, 2 in Santa Cruz and 1 in Mendoza. In addition, the national Senate debated the judge's request to search the private residences of former president Cristina Kirchner, currently a senator protected by parliamentary immunity. The vote in the chamber ended 66 in favor and none against, and the searches were authorized.

 

The raids seized more than 250 paintings in the possession of two defendants, valued at approximately 37 million pesos ($37,000,000) and 4.5 million pesos ($4,500,000) respectively. In addition, more than 6,350,000 pesos, more than 1,245,000 US dollars and more than 235,000 euros were seized, as well as sums of money in Chilean and Uruguayan pesos and Brazilian reais, nine firearms, mobile phones and electronic equipment and, finally, a large number of vehicles of different makes and models, many of them high-end.

In total, 53 people have been indicted so far, including a former president, the entire leadership of the Ministry of Federal Planning and the main public works contractors in Argentina. Around 35 people have turned state's witness and confessed their crimes. Asset freezes for 540 million dollars have been ordered.

For the investigation of the bribery notebooks, Diego Cabot and his team won the National Investigative Journalism Award from the Foro de Periodismo Argentino and the King of Spain International Journalism Award.


LA NACION DATA Website

Meet the team!

>>> lanacion.com.ar/data <<<

Project Description

The LA NACION DATA website is not just a website: it's a strategy, a project and a team. The strategy has to do with the commitment to using data to tell stories and to expanding the use of data, preferably open data, to activate demand for public information in a country that has just passed a FOIA law.

The internal strategy is to work in teams with journalists, TV producers and infographers: we facilitate the data and help with the investigation or analysis, but they are the ones who master the topic and know how to tell the stories.


Where the main political forces won and lost votes

Argentina held legislative elections in 2017. Not only are new representatives to Congress elected; it is also an opportunity for the political forces to measure their strength in each province.

The map shows the 2017 election results for each political force, and the user can compare them with the primary elections and with the 2015 presidential elections.

The tool not only displays the results in absolute values, but also the percentage of votes that each political force won or lost. The user can click on each province and see the result for each location in a pop-up.

The innovation is in the design, the way of presenting the information. We chose to represent the provinces of our country as squares, and to give each one a simple arrow chart reflecting the percentage of votes each political force won or lost.

 

How we did it

We looked for the official results data of the 2015 presidential elections, the 2017 primary elections and the 2017 general elections.

With all the information, we grouped the political forces into three major categories: Cambiemos, Kirchnerist Peronism and non-Kirchnerist Peronism.

The political groups that do not identify with these three forces were not taken into account.

We used: D3.js, Underscore.js, jQuery, CSS3, Sass, Node.js and Gulp.


Mortgage loans simulator

In Argentina, for more than ten years, only those with good salaries had access to mortgage loans. There were no flexible offers and the rates were very high.

With the new government, this changed. A new type of credit was created, called a mortgage loan adjusted for inflation. And it boosted the housing loan market.

What’s new? This loan begins with a low initial installment, which increases over time according to the national inflation index.

When weighing a housing loan, it is necessary to understand that the amount that can be requested depends on salary and age, among other variables. People weren't used to this new type of credit or to how to calculate it. That's why LA NACIÓN developed a mortgage loan simulator that lets users see their financing options in a practical way.

This was the first time we built a tool based on open data working together with the commercial area of the newspaper.

How we did it

We used the data on mortgage loans published by the Central Bank and generated a calculator that allows users to find their best option.

At the same time, we allowed interested banks to place their corporate logos through a commercial campaign that included a contact button, so that users could leave their personal data and be contacted by the bank.

Watch the video below to see how it works!

 

All banks in our country are required to report all their products and services, with detailed information on each one, to the Central Bank of the Argentine Republic. This information is open to the public on the Central Bank's website, and we used its dataset on mortgage loans.

First, we surveyed the market and chose the main banks operating nationwide. Then we worked hard to understand the formula for calculating the first installment of a loan, and the last installment of the first three years of that loan (in Argentina we are interested in knowing how inflation will affect our credit obligations).

With the formula in hand, we translated the calculation into JavaScript, creating a tool that allows users to choose their own filters (type of home, type of credit) and enter the necessary data (such as salary and age) to see the result each bank offers for their situation.
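The core of the calculation is the standard French amortization formula. The sketch below shows that formula only, with illustrative rates, amounts and an assumed affordability rule; the published tool also handles the inflation-adjusted (UVA) variants and each bank's own conditions.

```javascript
// Sketch of the core installment calculation (French amortization system).
function monthlyInstallment(principal, annualRate, years) {
  const i = annualRate / 12; // monthly rate
  const n = years * 12;      // number of installments
  return principal * i / (1 - Math.pow(1 + i, -n));
}

// Example: 2,000,000 pesos at a 5% nominal annual rate over 20 years.
const installment = monthlyInstallment(2000000, 0.05, 20);
console.log('First installment:', installment.toFixed(2));

// A bank typically caps the installment at a share of net salary (25% assumed here).
const maxInstallment = salary => salary * 0.25;
console.log('Affordable with a 90,000-peso salary?', installment <= maxInstallment(90000));
```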

After that, we focused on the design of the tool. There was a lot of information to show, and it had to be understandable.

Impact

The article where the tool is embedded was published on September 17 and has since accumulated almost one million page views (999,559 to be specific). 30% of the visits to the article came from Google, thanks to excellent indexing that made La Nación the first reference for mortgage loan queries.
The tool was also embedded in all LA NACIÓN articles about mortgage loans, and on other media websites. This pushed the totals past the article's own pageviews, reaching a million and a half visits and an average of seven minutes spent in the tool.
On the commercial side of the project, the first three banks to participate were Supervielle, Santander Río and HSBC. A month and a half after publication, Macro and ICBC joined.


Candichat, an interview with the candidates through WhatsApp

Date of publication

17-10-2017

>>> Live Link <<<

 

Description of the project

This project was conceived to bring citizens and legislative candidates together ahead of the crucial 2017 legislative elections in Argentina. To make the experience attractive for any user, we designed a platform that simulated a conversation with politicians through WhatsApp, the most widespread means of communication among Argentines. The answers were real: the questions had already been put to the candidates, who answered a LA NACION journalist privately through WhatsApp. All candidates were asked the same questions, and each user can then choose any question to “chat” about with their candidate. When LA NACION interviewed the candidates through WhatsApp, they were specifically asked to use all the available resources of the application: audio, emojis, videos, etc.


Legislative Elections 2017: map with live results province by province

A unique and valuable service for users to follow the legislative election results for each district and province in real time.

In addition to the map with the percentage of votes for each province, the tool offers:

- The option to see the results not only by province, but also by the districts that make up each one.

- The name, photograph and political orientation of each of the 4,216 candidates for senator and deputy.

- The results they obtained with the absolute value and the percentage of votes.

- The number of votes counted at a given time

- A switch that allows the user to compare the current results with the primary results, and a color palette for each political orientation, with gradual transparency that lets the user see the distribution of votes among them.

- Another switch to view the results for deputies or for senators.

The real-time updating of the results was automated, and users could see the date and time of the last update.

 Live link  

 

Impact

The map was replicated by several news outlets; even our main media group competitor showed the app on its TV programs.

The election coverage was a trending topic on Twitter, and many politicians from the different districts shared the web app on their social media.

It obtained 1,170,000 pageviews, and 40% of the traffic came from Google.

 

How we did it

We divided the process into multiple steps.

Firstly, we downloaded and validated the election data given by the government.

Then we saved it into a database for later processing and created different JSON files depending on the app's needs.

We tried to write small processes so that the work and the runtime could be parallelized more easily.

In addition, to show the seats that each party won, we calculated and programmed the allocation formula using the D'Hondt system for the Chamber of Deputies; for the senators it was simpler, because the seats go directly to the majority and the first minority.
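A compact sketch of the D'Hondt allocation (with made-up vote counts) looks like this:

```javascript
// Sketch of D'Hondt seat allocation as used for the Chamber of Deputies.
// Vote counts and the number of seats are illustrative only.
function dhondt(votesByParty, seats) {
  const allocation = Object.fromEntries(Object.keys(votesByParty).map(p => [p, 0]));
  for (let s = 0; s < seats; s++) {
    let best = null;
    let bestQuotient = -1;
    for (const [party, votes] of Object.entries(votesByParty)) {
      const quotient = votes / (allocation[party] + 1);
      if (quotient > bestQuotient) {
        bestQuotient = quotient;
        best = party;
      }
    }
    allocation[best]++; // give the next seat to the highest quotient
  }
  return allocation;
}

console.log(dhondt({ A: 340000, B: 280000, C: 160000, D: 60000 }, 5));
// -> { A: 2, B: 2, C: 1, D: 0 }
```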

We used Python, with PostgreSQL as the database, to process and store the official data provided by the government, JavaScript and D3.js to render the map, and Amazon services to host the backend and frontend.


Using DevExtreme, HTML5 and JavaScript to connect reusable data visualizations with updated data

Since 2012, lanacion.com has kept an open data catalog with updated datasets. We also report stories with open data published in Google Spreadsheets.

Most of these datasets are updated manually, and we use Tableau Public to illustrate most of our stories with interactives, as we cannot afford many developers: we have only one.

Since 2013, our dataviz designer has learned D3.js and JavaScript to develop our own visualizations, and we identified many reusable ones that could serve as useful context for many stories.

He selected DevExtreme: a cross-platform HTML5/JavaScript tool for creating responsive web applications for touch-enabled devices and traditional desktops.
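A reusable chart built this way can be wired up in a few lines. The snippet below is a sketch using DevExtreme's jQuery-style dxChart widget; the container id, data URL and field names are hypothetical placeholders.

```javascript
// Sketch of a reusable DevExtreme line chart fed with regularly updated data.
$(function () {
  $.getJSON('/data/dollar-daily.json', function (data) {
    $('#dollar-chart').dxChart({
      dataSource: data, // e.g. [{ date: '2018-01-02', official: 18.9, blue: 19.4 }, ...]
      commonSeriesSettings: { argumentField: 'date', type: 'line' },
      series: [
        { valueField: 'official', name: 'Official' },
        { valueField: 'blue', name: 'Blue' }
      ],
      legend: { verticalAlignment: 'bottom', horizontalAlignment: 'center' },
      title: 'Dollar vs. peso, daily'
    });
  });
});
```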

Example 1) Dollar: official and “blue” dollar-to-peso exchange rates, updated daily

 

Example 2) Central Bank dollar reserves (Reservas del Banco Central), updated weekly
