
When I built my first n8n workflow, as a data scientist, it felt like I was cheating.

I could connect to APIs without reading 30-page docs, trigger workflows from Gmail or Sheets, and deploy something useful in minutes.

However, the major drawback is that n8n is not natively optimised to run a Python environment in the cloud instances used by our customers.

Like many data scientists, my daily toolbox for data analytics is built on NumPy and Pandas.

To stay in my comfort zone, I often outsourced calculations to external APIs instead of using n8n JavaScript code nodes.

Production Planning n8n workflow with API function calling – (Image by Samir Saci)

For instance, this is what is done with a Production Planning Optimisation tool, which is orchestrated by a workflow that includes an Agent node calling a FastAPI microservice.

This approach worked, but I had clients who asked for full visibility of the data analytics tasks in their n8n user interface.

I realised that I needed to learn just enough JavaScript to perform data processing with n8n's native Code nodes.

Example of a JavaScript node grouping sales by ITEM – (Image by Samir Saci)

In this article, we will experiment with small JavaScript snippets inside n8n Code nodes to perform everyday data analytics tasks.

For this exercise, I will use a dataset of sales transactions and walk it through an ABC and a Pareto analysis, which are widely used in Supply Chain Management.

ABC XYZ & Pareto Charts widely used in Supply Chain Management – (Image by Samir Saci)

I will show side-by-side examples of Pandas vs. JavaScript in n8n Code nodes, allowing us to translate our familiar Python data analysis steps directly into automated n8n workflows.

Example of JavaScript vs. Pandas – (Image by Samir Saci)

The idea is to implement these solutions for small datasets or quick prototyping within the capabilities of a cloud enterprise n8n instance (i.e. without community nodes).

The experimental workflow we will build together – (Image by Samir Saci)

I will end the experiment with a quick comparative study of the performance versus a FastAPI call.

You can follow along and replicate the entire workflow using a Google Sheet and a workflow template shared in the article.

Let’s begin!

Building a Data Analytics Workflow using JavaScript in n8n

Before starting to build nodes, let me introduce the context of this analysis.

ABC & Pareto Charts for Supply Chain Management

For this tutorial, I propose that you build a simple workflow that takes sales transactions from Google Sheets and transforms them into comprehensive ABC and Pareto charts.

This will replicate the ABC and Pareto Analysis module of the LogiGreen Apps developed by my startup, LogiGreen.

ABC Analysis Module of the LogiGreen Apps – (Image by Samir Saci)

The goal is to generate a set of visuals for the inventory teams of a supermarket chain to help them understand the distribution of sales across their stores.

We will focus on generating two visuals.

The first chart shows an ABC-XYZ analysis of sales items:

ABC XYZ Chart – (Image by Samir Saci)
  • X-axis (Share of Turnover %): the contribution of each item to total revenue.
  • Y-axis (Coefficient of Variation): the demand variability of each item.
  • Vertical red lines split items into A, B, and C classes based on turnover share.
  • The horizontal blue line marks stable vs. variable demand (CV = 1).

Together, these highlight which items are high-value & stable (A, low CV) versus those that are low-value or highly variable, guiding prioritisation in inventory management. As a reminder, the coefficient of variation is the standard deviation of daily sales divided by their mean: an item selling [10, 12, 8, 10] units per day has a CV of about 0.16 (stable), while [0, 30, 0, 10] gives a CV of about 1.41 (erratic).

The second visual is a Pareto analysis of sales turnover:

Pareto Chart generated by the LogiGreen App – (Image by Samir Saci)
  • X-axis: share of SKUs (ranked by sales).
  • Y-axis: cumulative share of annual turnover.
  • The curve illustrates how a small fraction of items contributes to the majority of revenue.

In short, this highlights (or not) the classic Pareto rule, which states that 80% of sales can come from 20% of the SKUs.

How did I generate these two visuals? I simply used Python.

On my YouTube channel, I shared a complete tutorial on how to do it using Pandas and Matplotlib.

The objective of this tutorial is to prepare sales transactions and generate these visuals in a Google Sheet using only n8n's native JavaScript nodes.

Building a Data Analytics Workflow in n8n

I propose building a workflow that is manually triggered to facilitate debugging during development.

Final workflow, manually triggered, collecting data from Google Sheets to generate visuals – (Image by Samir Saci)

To follow this tutorial, you need to make your own copy of the Google Sheet shared with the workflow template.

You can now connect your duplicated sheet using the second node, which will extract the dataset from the worksheet: Input Data.

Connect the second node to your copy of the Google Sheet to collect input data – (Image by Samir Saci)

This dataset includes retail sales transactions at the daily granularity:

  • ITEM: an item that can be sold in multiple stores
  • SKU: an `ITEM` sold in a specific store
  • FAMILY: a group of items
  • CATEGORY: a product category can include multiple families
  • STORE: a code representing a sales location
  • DAY: day of the transaction
  • QTY: sales quantity in units
  • TO: sales turnover in euros

The output is the table's content in JSON format, ready to be ingested by other nodes.
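To make this concrete, here is a hypothetical sample of a single row emitted by the node (values invented for illustration):

{
  "ITEM": "ITEM-001",
  "SKU": "ITEM-001-STORE-01",
  "FAMILY": "FAMILY-A",
  "CATEGORY": "CATEGORY-1",
  "STORE": "STORE-01",
  "DAY": "2024-03-15",
  "QTY": 12,
  "TO": 45.6
}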

Python Code

import pandas as pd
df = pd.read_csv("gross sales.csv") 

We can now start processing the dataset to build our two visualisations.

Step 1: Filter out transactions without sales

Let us start with the simple action of filtering out transactions with a sales QTY equal to zero.

Filter out transactions without sales using the Filter node – (Image by Samir Saci)

We don't need JavaScript here; a simple Filter node can do the job.

Python Code

df = df[df["QTY"] != 0]

Step 2: Prepare data for the Pareto Analysis

We first need to aggregate the sales per ITEM and rank products by turnover.

Python Code

sku_agg = (df.groupby("ITEM", as_index=False)
             .agg(TO=("TO","sum"), QTY=("QTY","sum"))
             .sort_values("TO", ascending=False))

In our workflow, this step is done in the JavaScript node TO, QTY GroupBY ITEM:

const agg = {};
for (const { json } of items) {
  const ITEM = json.ITEM;
  const TO = Number(json.TO);
  const QTY = Number(json.QTY);
  if (!agg[ITEM]) agg[ITEM] = { ITEM, TO: 0, QTY: 0 }; // initialise running totals
  agg[ITEM].TO += TO;
  agg[ITEM].QTY += QTY;
}
const rows = Object.values(agg).sort((a, b) => b.TO - a.TO); // rank by turnover
return rows.map(r => ({ json: r }));

This node returns a ranked table of sales per ITEM in quantity (QTY) and turnover (TO):

  1. We initialise agg as a dictionary keyed by ITEM
  2. We loop over the n8n rows in items:
  • converting TO and QTY to numbers
  • adding the QTY and TO values to the running totals of each ITEM
  3. We finally transform the dictionary into an array sorted by TO descending and return the items
Output data of the aggregation of sales by ITEM – (Image by Samir Saci)
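Each output row then has this shape (values are illustrative, not taken from the real dataset):

[
  { "ITEM": "ITEM-001", "TO": 15230.5, "QTY": 1240 },
  { "ITEM": "ITEM-042", "TO": 9875.0, "QTY": 860 }
]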

We now have the data ready to perform a Pareto Analysis on sales quantity (QTY) or turnover (TO).

For that, we need to calculate cumulative sales and rank SKUs from the highest to the lowest contributor.

Python Code

abc = sku_agg.copy()  # from Step 2, already sorted by TO desc
whole = abc["TO"].sum() or 1.0
abc["cum_turnover"] = abc["TO"].cumsum()
abc["cum_share"]    = abc["cum_turnover"] / whole             
abc["sku_rank"]     = vary(1, len(abc) + 1)
abc["cum_skus"]     = abc["sku_rank"] / len(abc)               
abc["cum_skus_pct"] = abc["cum_skus"] * 100                    

This step is done in the Code node Pareto Analysis:

const rows = items
  .map(i => ({
    ITEM: i.json.ITEM,
    TO: Number(i.json.TO) || 0,
    QTY: Number(i.json.QTY) || 0,
  }))
  .sort((a, b) => b.TO - a.TO);

const n = rows.length; // number of ITEMs
const totalTO = rows.reduce((s, r) => s + r.TO, 0) || 1;

We collect the dataset items from the previous node:

  1. For each row, we clean up the fields TO and QTY (in case we have missing values)
  2. We sort all SKUs by turnover in descending order
  3. We store in variables the number of items and the total turnover
let cumTO = 0;
rows.forEach((r, idx) => {
  cumTO += r.TO;
  r.cum_turnover = cumTO;                     
  r.cum_share = +(cumTO / totalTO).toFixed(6); 
  r.sku_rank = idx + 1;
  r.cum_skus = +((idx + 1) / n).toFixed(6);   
  r.cum_skus_pct = +(r.cum_skus * 100).toFixed(2);
});

return rows.map(r => ({ json: r }));

Then we loop over all items in sorted order.

  1. Use the variable cumTO to compute the cumulative contribution
  2. Add several Pareto metrics to each row:
  • cum_turnover: cumulative turnover up to this item
  • cum_share: cumulative share of turnover
  • sku_rank: ranking position of the item
  • cum_skus: cumulative number of SKUs as a fraction of total SKUs
  • cum_skus_pct: same as cum_skus, but in %

We are then done with the data preparation for the Pareto chart.

Final results – (Image by Samir Saci)

This dataset is saved in the worksheet Pareto by the node Update Pareto Sheet.

And with a bit of magic, we can generate this graph in the first worksheet:

Pareto Chart generated using data processed by the n8n workflow – (Image by Samir Saci)

We can now proceed with the ABC XYZ chart.

Step 3: Calculate the demand variability and sales contribution

We could reuse the output of the Pareto chart for the sales contribution, but we will treat each chart as independent.

I will split the code for the node Demand Variability x Sales % into several segments for readability.

Block 1: define functions for mean and standard deviation

function mean(a) { return a.reduce((s, x) => s + x, 0) / (a.length || 1); }
function stdev_samp(a) {
  if (a.length <= 1) return 0;
  const m = mean(a);
  const v = a.reduce((s, x) => s + (x - m) ** 2, 0) / (a.length - 1);
  return Math.sqrt(v);
}

These two functions will be used for the coefficient of variation (CV):

  • mean(a): computes the average of an array.
  • stdev_samp(a): computes the sample standard deviation.

They take as inputs the daily sales distributions of each ITEM, which we build in the second block.
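As a quick sanity check of these helpers (numbers chosen for the example):

mean([2, 4, 6]);        // 4
stdev_samp([2, 4, 6]);  // 2, so the coefficient of variation is 2 / 4 = 0.5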

Block 2: Create the daily sales distribution of each ITEM

const series = {};  // ITEM -> { day -> qty_sum }
let totalQty = 0;

for (const { json } of items) {
  const item = String(json.ITEM);
  const day  = String(json.DAY);
  const qty  = Number(json.QTY || 0);

  if (!series[item]) series[item] = {};
  series[item][day] = (series[item][day] || 0) + qty;
  totalQty += qty;
}

Python Code

import pandas as pd
import numpy as np
df['QTY'] = pd.to_numeric(df['QTY'], errors='coerce').fillna(0)
daily_series = df.groupby(['ITEM', 'DAY'])['QTY'].sum().reset_index()

Now we can compute the metrics applied to the daily sales distributions.

const out = [];
for (const [item, dayMap] of Object.entries(series)) {
  const daily = Object.values(dayMap); // daily sales quantities
  const qty_total = daily.reduce((s, x) => s + x, 0);
  const m = mean(daily);               // average daily sales
  const sd = stdev_samp(daily);        // variability of sales
  const cv = m ? sd / m : null;        // coefficient of variation
  const share_qty_pct = totalQty ? (qty_total / totalQty) * 100 : 0;

  out.push({
    ITEM: item,
    qty_total,
    share_qty_pct: Number(share_qty_pct.toFixed(2)),
    mean_qty: Number(m.toFixed(3)),
    std_qty: Number(sd.toFixed(3)),
    cv_qty: cv == null ? null : Number(cv.toFixed(3)),
  });
}

For each ITEM, we calculate:

  • qty_total: total sales quantity
  • mean_qty: average daily sales
  • std_qty: standard deviation of daily sales
  • cv_qty: coefficient of variation (the variability measure for the XYZ classification)
  • share_qty_pct: % contribution to total sales (used for the ABC classification)

Here is the Python version in case you got lost:

summary = daily_series.groupby('ITEM').agg(
    qty_total=('QTY', 'sum'),
    mean_qty=('QTY', 'mean'),
    std_qty=('QTY', 'std')
).reset_index()

summary['std_qty'] = summary['std_qty'].fillna(0)

total_qty = summary['qty_total'].sum()
summary['cv_qty'] = summary['std_qty'] / summary['mean_qty'].replace(0, np.nan)
summary['share_qty_pct'] = 100 * summary['qty_total'] / total_qty

We are nearly done.

We just need to sort by descending contribution to prepare for the ABC class mapping:

out.sort((a, b) => b.share_qty_pct - a.share_qty_pct);
return out.map(r => ({ json: r }));

We now have, for each ITEM, the key metrics needed to create the scatter plot.

Output of the node Demand Variability x Sales % – (Image by Samir Saci)

Only the ABC classes are missing at this step.

Step 4: Add ABC classes

We take the output of the previous node as input.

let rows = items.map(i => i.json);
rows.sort((a, b) => b.share_qty_pct - a.share_qty_pct);

Just in case, we sort ITEMs by descending sales share (%) → most important SKUs first.

(This step can be omitted, as it is normally already done at the end of the previous Code node.)

Then we can apply the classes based on hardcoded conditions:

  • A: SKUs that collectively represent the first 5% of sales
  • B: SKUs that collectively represent the next 15% of sales
  • C: everything after 20%.

let cum = 0;
for (let r of rows) {
  cum += r.share_qty_pct;

  // 3) Assign class based on cumulative %
  if (cum <= 5) {
    r.ABC = 'A';   // top 5%
  } else if (cum <= 20) {
    r.ABC = 'B';   // next 15%
  } else {
    r.ABC = 'C';   // rest
  }

  r.cum_share = Number(cum.toFixed(2));
}

return rows.map(r => ({ json: r }));

This can be done the following way in Python:

df = df.sort_values('share_qty_pct', ascending=False).reset_index(drop=True)
df['cum_share'] = df['share_qty_pct'].cumsum()
def classify(cum):
    if cum <= 5:
        return 'A'
    elif cum <= 20:
        return 'B'
    else:
        return 'C'
df['ABC'] = df['cum_share'].apply(classify)

The results can now be used to generate this chart, which can be found in the first sheet of the Google Sheet:

ABC XYZ Chart generated with the data processed by the workflow using JavaScript – (Image by Samir Saci)

I struggled (probably due to my limited knowledge of Google Sheets) to find a "manual" solution to create this scatter plot with the right colour mapping.

Therefore, I used a Google Apps Script available in the Google Sheet to create it.

Script included in the Google Sheet to generate the visual – (Image by Samir Saci)
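The full script ships with the shared Google Sheet, but a minimal sketch of the approach looks like this (the sheet name, range, and colour option are assumptions for illustration, not the actual script):

// Build a scatter chart from the worksheet holding the ABC-XYZ metrics.
// The trick is to split rows into one series per ABC class so that
// each class gets its own colour.
function buildAbcXyzChart() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('ABC_XYZ'); // assumed name
  const chart = sheet.newChart()
    .setChartType(Charts.ChartType.SCATTER)
    .addRange(sheet.getRange('A1:B200'))           // turnover share vs. CV
    .setOption('series', { 0: { color: 'red' } })  // colour of the first class series
    .setPosition(1, 5, 0, 0)
    .build();
  sheet.insertChart(chart);
}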

As a bonus, I added extra nodes to the n8n template that perform the same kind of GroupBy to calculate sales by store or by ITEM-store pair.

The experimental workflow we built together – (Image by Samir Saci)

They can be used to create visuals like this one:

Total Daily Sales Quantity per Store – (Image by Samir Saci)

To conclude this tutorial, we can confidently declare the job done.

For a live demo of the workflow, you can check out this short tutorial.

Our customers, who run this workflow on their n8n cloud instance, can now gain visibility into each step of the data processing.

But at what cost? Are we losing performance?

That is what we will discover in the next section.

Comparative Study of Performance: n8n JavaScript Nodes vs. Python in FastAPI

To answer this question, I prepared a straightforward experiment.

The same dataset and transformations were processed using two different approaches inside n8n:

  1. All in JavaScript nodes, with functions directly inside n8n.
  2. Outsourcing to FastAPI microservices, by replacing the JavaScript logic with HTTP requests to Python endpoints.
Simple workflow using the FastAPI microservice – (Image by Samir Saci)

These two endpoints are linked to functions that load the data directly from the VPS instance where I hosted the microservice.

@router.publish("/launch_pareto")
async def launch_speedtest(request: Request):
    strive:
        session_id = request.headers.get('session_id', 'session')

        folder_in = f'knowledge/session/speed_test/enter'
        if not path.exists(folder_in):
                makedirs(folder_in)

        file_path = folder_in + '/gross sales.csv'
        logger.data(f"[SpeedTest]: Loading knowledge from session file: {file_path}")
        df = pd.read_csv(file_path, sep=";")
        logger.data(f"[SpeedTest]: Knowledge loaded efficiently: {df.head()}")

        speed_tester = SpeedAnalysis(df)
        output = await speed_tester.process_pareto()
        
        consequence = output.to_dict(orient="data")
        consequence = speed_tester.convert_numpy(consequence)
        
        logger.data(f"[SpeedTest]: /launch_pareto accomplished efficiently for {session_id}")
        return consequence
    besides Exception as e:
        logger.error(f"[SpeedTest]: Error /launch_pareto: {str(e)}n{traceback.format_exc()}")
        increase HTTPException(status_code=500, element=f"Did not course of Pace Check Evaluation: {str(e)}")
    
@router.publish("/launch_abc_xyz")
async def launch_abc_xyz(request: Request):
    strive:
        session_id = request.headers.get('session_id', 'session')

        folder_in = f'knowledge/session/speed_test/enter'
        if not path.exists(folder_in):
                makedirs(folder_in)

        file_path = folder_in + '/gross sales.csv'
        logger.data(f"[SpeedTest]: Loading knowledge from session file: {file_path}")
        df = pd.read_csv(file_path, sep=";")
        logger.data(f"[SpeedTest]: Knowledge loaded efficiently: {df.head()}")

        speed_tester = SpeedAnalysis(df)
        output = await speed_tester.process_abcxyz()
        
        consequence = output.to_dict(orient="data")
        consequence = speed_tester.convert_numpy(consequence)
        
        logger.data(f"[SpeedTest]: /launch_abc_xyz accomplished efficiently for {session_id}")
        return consequence
    besides Exception as e:
        logger.error(f"[SpeedTest]: Error /launch_abc_xyz: {str(e)}n{traceback.format_exc()}")
        increase HTTPException(status_code=500, element=f"Did not course of Pace Check Evaluation: {str(e)}")

I want to focus this test solely on the data processing performance.

The SpeedAnalysis class includes all the data processing steps listed in the previous section:

  • Grouping sales by ITEM
  • Sorting ITEMs in descending order and calculating cumulative sales
  • Calculating the standard deviations and means of the sales distribution by ITEM
class SpeedAnalysis:
    def __init__(self, df: pd.DataFrame):
        config = load_config()

        self.df = df

    def processing(self):
        try:
            sales = self.df.copy()
            sales = sales[sales['QTY'] > 0].copy()
            self.sales = sales

        except Exception as e:
            logger.error(f'[SpeedTest] Error for processing: {e}\n{traceback.format_exc()}')

    def prepare_pareto(self):
        try:
            sku_agg = self.sales.copy()
            sku_agg = (sku_agg.groupby("ITEM", as_index=False)
                       .agg(TO=("TO", "sum"), QTY=("QTY", "sum"))
                       .sort_values("TO", ascending=False))

            pareto = sku_agg.copy()
            total = pareto["TO"].sum() or 1.0
            pareto["cum_turnover"] = pareto["TO"].cumsum()
            pareto["cum_share"]    = pareto["cum_turnover"] / total
            pareto["sku_rank"]     = range(1, len(pareto) + 1)
            pareto["cum_skus"]     = pareto["sku_rank"] / len(pareto)
            pareto["cum_skus_pct"] = pareto["cum_skus"] * 100
            return pareto
        except Exception as e:
            logger.error(f'[SpeedTest] Error for prepare_pareto: {e}\n{traceback.format_exc()}')

    def abc_xyz(self):
        daily = self.sales.groupby(["ITEM", "DAY"], as_index=False)["QTY"].sum()
        stats = (
            daily.groupby("ITEM")["QTY"]
            .agg(
                qty_total="sum",
                mean_qty="mean",
                std_qty="std"
            )
            .reset_index()
        )
        stats["cv_qty"] = stats["std_qty"] / stats["mean_qty"].replace(0, np.nan)
        total_qty = stats["qty_total"].sum()
        stats["share_qty_pct"] = (stats["qty_total"] / total_qty * 100).round(2)
        stats = stats.sort_values("share_qty_pct", ascending=False).reset_index(drop=True)
        stats["cum_share"] = stats["share_qty_pct"].cumsum().round(2)

        def classify(cum):
            if cum <= 5:
                return "A"
            elif cum <= 20:
                return "B"
            else:
                return "C"

        stats["ABC"] = stats["cum_share"].apply(classify)
        return stats

    def convert_numpy(self, obj):
        if isinstance(obj, dict):
            return {k: self.convert_numpy(v) for k, v in obj.items()}
        elif isinstance(obj, list):
            return [self.convert_numpy(v) for v in obj]
        elif isinstance(obj, (np.integer, int)):
            return int(obj)
        elif isinstance(obj, (np.floating, float)):
            return float(obj)
        else:
            return obj

    async def process_pareto(self):
        """Main processing function that calls the other methods in order."""
        self.processing()
        outputs = self.prepare_pareto()
        return outputs

    async def process_abcxyz(self):
        """Main processing function that calls the other methods in order."""
        self.processing()
        outputs = self.abc_xyz().fillna(0)
        logger.info(f"[SpeedTest]: ABC-XYZ analysis completed {outputs}.")
        return outputs

Now that we have these endpoints ready, we can start testing.
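In this second version, each Code node is replaced by an HTTP Request node pointing at the endpoints above. Conceptually, each call boils down to the following sketch in plain Node.js (the URL and header values are placeholders, not the production ones):

// Call the Pareto endpoint and map the records back to n8n-style items
const response = await fetch('https://my-vps.example.com/launch_pareto', {
  method: 'POST',
  headers: { session_id: 'session' },
});
const rows = await response.json(); // Pareto records computed by Pandas
return rows.map(r => ({ json: r }));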

Results of the experiment (top: processing using native Code nodes / bottom: FastAPI microservice) – (Image by Samir Saci)

The results are shown above:

  • JavaScript-only workflow: the whole process completed in a bit more than 11.7 seconds.
    Most of the time was spent updating sheets and performing iterative calculations inside n8n nodes.
  • FastAPI-backed workflow: the equivalent "outsourced" process completed in ~11.0 seconds.
    Heavy computations were offloaded to the Python microservice, which handled them faster than the native JavaScript nodes.

In other words, outsourcing complex computations to Python actually improves performance.

The reason is that the FastAPI endpoints execute optimised Python functions directly, whereas the JavaScript nodes inside n8n must iterate over rows with loops.

For large datasets, I would expect a delta that is probably not negligible.

This demonstrates that you can do simple data processing inside n8n using small JavaScript snippets.

However, our Supply Chain Analytics products can require more advanced processing involving optimisation and advanced statistical libraries.

AI Workflow for Production Planning Optimisation – (Image by Samir Saci)

For that, customers can accept dealing with a "black box" approach, as seen in the Production Planning workflow presented in this Towards Data Science article.

But for light processing tasks, we can integrate them into the workflow to give visibility to no-code users.

For another project, I use n8n to connect Supply Chain IT systems for the transfer of Purchase Orders using Electronic Data Interchange (EDI).

Example of an Electronic Data Interchange (EDI) Parsing Workflow – (Image by Samir Saci)

This workflow, deployed for a small logistics company, fully parses EDI messages using JavaScript nodes.

Example of an Electronic Data Interchange Message – (Image by Samir Saci)

As you can discover in this tutorial, we implemented 100% of the Electronic Data Interchange message parsing using JavaScript nodes.
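To give a flavour of what such a node does, here is a minimal sketch of EDIFACT-style segment splitting (the message and separators are illustrative; the production parser also handles the UNA header and release characters):

// Split an EDIFACT-like message into segments, then into elements
const message = "UNB+UNOC:3+SENDER+RECEIVER'UNH+1+ORDERS:D:96A:UN'";
const segments = message
  .split("'")
  .filter(Boolean)
  .map(seg => {
    const [tag, ...elements] = seg.split("+");
    return { tag, elements };
  });
// segments[0] -> { tag: "UNB", elements: ["UNOC:3", "SENDER", "RECEIVER"] }
return segments.map(s => ({ json: s }));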

This helped us improve the robustness of the solution and reduce our workload by handing over the maintenance to the customer.

What is the best approach?

For me, n8n should be used as an orchestration and integration tool connected to our core analytics products.

These analytics products require specific input formats that may not align with our customers' data.

Therefore, I would advise using JavaScript Code nodes to perform this preprocessing.

Workflow for the Distribution Planning Optimisation Algorithm – (Image by Samir Saci)

For example, the workflow above connects a Google Sheet (containing input data) to a FastAPI microservice that runs an algorithm for Distribution Planning Optimisation.

The idea is to plug our optimisation algorithm into a Google Sheet used by Distribution Planners to organise store deliveries.

Worksheet used by Planning Teams – (Image by Samir Saci)

The JavaScript Code node is used to transform the data collected from the Google Sheet into the input format required by our algorithm.
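A typical transformation node looks like this minimal sketch (the field names and payload structure are invented for illustration, not the actual API contract):

// Reshape sheet rows into the payload expected by the optimisation microservice
const payload = {
  orders: items.map(({ json }) => ({
    store: String(json.STORE),
    item: String(json.ITEM),
    qty: Number(json.QTY) || 0,
  })),
};
return [{ json: payload }];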

By doing the job inside the workflow, it stays under the control of the customer, who runs the workflow in their own instance.

And we can keep the optimisation part in a microservice hosted on our instance.

To better understand the setup, feel free to check out this short presentation.

I hope this tutorial and the examples above have given you enough insight to understand what can be done with n8n in terms of data analytics.

Feel free to share your comments about the approach and your thoughts on what could be improved to boost the workflow's performance.

About Me

Let's connect on LinkedIn and Twitter. I am a Supply Chain Engineer who uses data analytics to improve logistics operations and reduce costs.

For consulting or advice on analytics and sustainable supply chain transformation, feel free to contact me via LogiGreen Consulting.

Find your complete guide to Supply Chain Analytics: the Analytics Cheat Sheet.

If you are interested in Data Analytics and Supply Chain, take a look at my website.

Samir Saci | Data Science & Productivity
