Digital Commerce Blog - Blackbit

The new version is here: Pimcore Data Director 3.6

Written by | Apr 11, 2024 10:00:00 AM

Our developers are constantly expanding and optimising our import and export bundle for Pimcore. In version 3.6, you can once again expect many new small and large improvements that will make your daily data operations more efficient.

Performance improvements in version 3.6

  • Enormous reduction in memory consumption for exports
  • Raw data chunks are now created on the PHP side: raw data IDs require little memory, so they can all be loaded at once and split into chunks in PHP. This avoids paginating raw data in MySQL, which was previously very resource-intensive depending on how much raw data the system had stored.
  • Improved detection of changes in field collections. This means that unchanged field collections are recognised better and the data object does not need to be saved. The result: faster imports.
  • Prevention of multiple loading of remote assets during import.
  • Many smaller refactorings prevent the same code from being executed again and again. As import and export tasks normally repeat the logic for all elements with each run, there was a lot of potential here to skip sub-processes and thus increase performance.
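The ID-based chunking mentioned above can be sketched as follows. This is an illustrative Python sketch of the general technique, not the bundle's actual PHP implementation; all function names (`fetch_ids`, `fetch_rows`, `handle_row`) are hypothetical:

```python
# Illustrative sketch: chunk raw-data IDs in application memory instead of
# paginating with LIMIT ... OFFSET in MySQL. All names are hypothetical.

def chunked(ids, size):
    """Split a list of IDs into fixed-size chunks."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def process_raw_data(fetch_ids, fetch_rows, handle_row, chunk_size=1000):
    # IDs are small integers, so loading all of them at once is cheap ...
    all_ids = fetch_ids()          # e.g. SELECT id FROM the raw-data table
    for id_chunk in chunked(all_ids, chunk_size):
        # ... while the heavy row data is loaded one chunk at a time,
        # via "WHERE id IN (...)" instead of an increasingly slow OFFSET.
        for row in fetch_rows(id_chunk):
            handle_row(row)
```

The benefit is that the database never has to skip over already-processed rows; each chunk is fetched by primary key.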

Parameterised dataports

Only parameters that are actually used in the dataport (e.g. in filter conditions or callback functions) are saved in the dataport resource. This drastically reduces the number of dataport resources: it now makes no difference whether two different users perform an export, as long as the dataport does not use the requesting user as a parameter. The result is significantly fewer raw data duplicates, which benefits memory consumption and runtime.

In addition, parameters can now also be accessed if only the "Process raw data" step is executed. Previously, this was only possible if the parameters were saved in raw data fields; otherwise the parameters were not available.
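The deduplication idea can be illustrated like this: the identity of a dataport resource is derived only from the parameters the dataport actually uses, so requests that differ only in unused parameters map to the same resource. A minimal Python sketch with hypothetical names:

```python
# Illustrative sketch: compute a resource key from the parameters a dataport
# actually references. Requests differing only in unused parameters collide
# on purpose, so their raw data is shared instead of duplicated.

import hashlib
import json

def resource_key(used_params, request_params):
    # Keep only parameters the dataport references in filters/callbacks,
    # sorted so that parameter order does not change the key.
    relevant = {k: v for k, v in sorted(request_params.items())
                if k in used_params}
    return hashlib.sha1(json.dumps(relevant).encode()).hexdigest()
```

For example, if a dataport only uses the "product" parameter, exports triggered by two different users yield the same key and therefore share one set of raw data.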

Multi-level object wizards

Object wizards are forms in the Pimcore backend that you can use to create a user interface for common data maintenance or export tasks. With multi-level object wizards, this concept is now even more powerful. A multi-level object wizard consists of one or more individual object wizard dataports that are linked together in a pipeline. Practical use cases include:

  • Inserting a list of article numbers into a text field (Dataport 1), parsing these numbers and passing them as parameters to Dataport 2.
  • Dataport 2 is an object wizard form with a many-to-many object relationship. This relationship field is pre-filled with the data from Dataport 1. This allows you to check which products were found in the copy-paste list and should be changed or exported.
  • Even more remarkably, you can use this to create branching wizards: depending on the data of an object wizard form, the next page can be Dataport 2 or Dataport 3. In this way, you can set up complex wizards like those you know from application installers on Windows/macOS.
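The branching behaviour described above can be sketched as a small state machine: each step processes the form data and then decides which dataport comes next. This is a conceptual Python sketch with entirely hypothetical names, not the bundle's API:

```python
# Illustrative sketch of a branching wizard pipeline: each step is a
# (handler, router) pair; the router picks the next step from the data.

def run_wizard(steps, start, form_data):
    """steps maps a step name to (handler, router); router returns the
    name of the next step, or None to finish the wizard."""
    current = start
    visited = []
    while current is not None:
        handler, router = steps[current]
        visited.append(current)
        form_data = handler(form_data)
        current = router(form_data)   # e.g. "dataport2" or "dataport3"
    return form_data, visited
```

A wizard that parses a pasted list of article numbers in step 1 and then branches to a bulk or single-item form would simply return a different step name from its router.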

Optimisation of general Pimcore functions

  • Tracking changes to open objects: if another user or a dataport changes an object that you have open, its tab is reloaded automatically, provided you have no unsaved changes.
  • In the main menu there is an entry for opening objects by ID/path, which is now supplemented by a submenu for each indexed field, e.g. to open objects directly by article number.
  • Automatic setting of the class icon (if not already present) to a random object icon, so that different classes have different coloured icons. This makes it easier to distinguish between objects of different classes at a glance.
  • Automatic tab management: If the available width of the tab bar is exceeded, the tab that has not been accessed for the longest time is automatically closed.
  • Integration with dachcom-digital/formbuilder: The Data Director is supported as an API channel for dachcom-digital/formbuilder, so you can create front-end forms with the form builder and implement the processing logic in the Data Director.

Dataport settings

  • Support for splitting import data into blocks of 10,000 records, which are then imported in multiple parallel processes to optimise performance.
  • Support for descending sorting of raw data.
  • Authorisations
    • Added the "Data Director Admin" authorisation for setups with users who are not Pimcore admins but should still be able to access data ports.
    • Newly created dataports are automatically shared with users who have the same role as the person who created the dataport.
  • Import archive
    • For URL-based imports, the archive file is now named with the correct file extension. For example, .json for a JSON-based import - previously the file extension was .tmp. This allows you to view the respective content of the archive files in the Pimcore backend.
    • For parameterised imports, the archive files are now grouped according to parameters, e.g. for the import resource http://example.org/api?product= the archive file is stored in /archive/ABC if the parameter "product"=ABC is used.
    • Grouping of archive folders by date: An archive file used to be called 2024-01-15-12-00-00-example.csv, now it is stored in folders: 2024/01/15/12-00-00-example.csv
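The new date-based archive layout can be expressed as a simple path transformation: the timestamp prefix of the old flat file name becomes a nested folder structure. A minimal Python sketch (the helper name is hypothetical):

```python
# Illustrative sketch of the new archive layout:
#   old: 2024-01-15-12-00-00-example.csv  (one flat file name)
#   new: 2024/01/15/12-00-00-example.csv  (year/month/day folders)

from datetime import datetime

def archive_path(original_name, when):
    """Build the date-grouped archive path for a file archived at `when`."""
    return when.strftime("%Y/%m/%d/%H-%M-%S-") + original_name
```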

Attribute mapping

  • Support for importing localised asset metadata with the syntax ['fieldname#en' => 'value'].
  • Added callback function template for HTML to text conversion.
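The "fieldname#locale" key syntax for localised asset metadata can be split into field name and language with a one-line helper. This is an illustrative Python sketch of the syntax, not the bundle's implementation:

```python
# Illustrative sketch: split the "fieldname#locale" key syntax used for
# localised asset metadata, e.g. "title#en" -> ("title", "en").

def split_localised_key(key, default_locale=None):
    field, sep, locale = key.partition("#")
    # Keys without "#" fall back to the (possibly unset) default locale.
    return field, (locale if sep else default_locale)
```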

Dataport execution window

  • Added input fields for the parameters of a parameterised dataport resource. For example, if http://example.org/api?product= is used, the parameter "product" can be entered in the dataport execution window.

Other changes

  • Support for importing all files from folders, even if the -rm flag is not used. This makes it possible to import complete folders from the Pimcore backend.
  • French and Italian are now available as UI languages.
  • The creation of video thumbnails in exports is supported.
  • Dataports that are executed via the context menu of the element tree are now also executed with force=1. Otherwise, iterative exports would not be executed again for successive calls with unchanged data, making testing more difficult.
  • Date fields are now supported as key fields.
  • Attribute mapping: better preview for relations. Previously, json_encode() was used; since almost all fields of an element are protected, practically nothing was displayed. Now the element is properly serialised.
  • JSON parser
    • Support for JMESPath/JSON pointer conversion, making products equivalent to data/products.
    • Support for ../ to access JSON data located above the actual article data.
  • Pimcore element-based exports: Support for enabling or disabling inheritance for classes that do not allow inheritance but have localised fields with at least one fallback language.
  • Removal of moontoast/math, as it is no longer being developed.
  • ObjectBricksOptionProvider added: There is now the option provider "@DataDirectorObjectBricksOptionProvider" for selection fields.
    A practical use case: you can define at category level which object brick applies to all products assigned to this category. To do this, set up a dataport that retrieves the brick name of the category and "imports" the corresponding brick into the object brick container field. Activate "Run automatically on new data" to run this dataport automatically whenever a product object is saved. This assigns the object bricks to the products automatically, depending on which category they are assigned to.
  • Regular optimisation of the plugin_pim_rawItemData table to reduce the required hard disk space, analogous to pimcore/pimcore#11817.
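The ../ support in the JSON parser is easiest to understand as relative path resolution against a nested document: each ../ climbs one level above the current article node before descending again. The following Python sketch illustrates the idea; the helper and its signature are hypothetical, not the bundle's API:

```python
# Illustrative sketch: resolve a relative path such as "../meta/currency"
# against the position of the current item in a nested JSON document.

def resolve(data, item_path, rel_path):
    """item_path: keys/indices leading to the current item.
    rel_path: slash-separated path where ".." climbs one level up."""
    path = list(item_path)
    for part in rel_path.split("/"):
        if part == "..":
            path.pop()            # climb above the current node
        else:
            path.append(part)     # descend into a child key
    node = data
    for step in path:
        node = node[step]
    return node
```

This lets per-article mappings pull in shared data (e.g. a currency defined once at the top of the feed) without duplicating it into every article record.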

Helpful video tutorials for the Pimcore Data Director

We offer detailed instructions and many practical tips on the efficient use of the Pimcore Data Director in the video tutorials in the Blackbit Academy and on the Blackbit YouTube channel.

Not yet familiar with the Data Director Bundle?

If you would like to get to know our powerful import and export bundle better first, why not try it out in our free demo installation?
Would you like to test a specific use case? Then please send us your task and your data. We would be happy to offer you a workshop in which we show you how to solve your individual requirements with the Data Director.