This repository contains the code and data to reproduce the results in our paper, *An Integer Programming Approach for Quick-Commerce Assortment Planning*. The online manuscript is available on arXiv.
This project was developed and tested using Python 3.11, which serves as the reference environment. Newer versions have not been fully tested and may introduce unexpected issues. For best results, use Python 3.11.

```bash
conda create --name myenv python=3.11
conda activate myenv
pip install -r requirements.txt
```

The `requirements.txt` file is located in the project root folder.

To remove this Python environment:

```bash
conda remove --name myenv --all
```
Note: Before proceeding, you must obtain a license for the commercial solver Gurobi.
We provide the dataset used in Table 2 of Section 6 (Computational Experiments). You can find it in `./DataSet/` or download it separately. The dataset file is over 200 MB; due to GitHub's file-size limit, if you encounter download issues, please contact the author via email.
After downloading and unpacking the dataset file, place the `.pkl` file into the `./DataSet/` folder. (See the data structure introduction for more details.)
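To sanity-check the dataset after placing it, you can load the `.pkl` file with Python's standard `pickle` module. This is a minimal illustrative sketch; the filename shown is hypothetical, and the internal structure depends on the dataset you downloaded:

```python
import pickle
from pathlib import Path

def load_instances(data_path):
    """Load a pickled instance file and report its top-level structure."""
    with open(data_path, "rb") as f:
        data = pickle.load(f)
    print(type(data))
    if isinstance(data, dict):
        # Peek at the first few keys to verify the expected structure
        print(list(data.keys())[:10])
    return data

# Example (hypothetical filename; use the actual .pkl file you placed in ./DataSet/):
# data = load_instances(Path("./DataSet") / "instances.pkl")
```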
Alternatively, you can generate the dataset randomly using:
```bash
python data_generate.py 36  # repeatNum
```

This will generate the dataset and save it to `./DataSet/`. The parameter `36` represents the number of repetitions.
Run `run_plain_repeat.py` to replicate Table 2. This script will:

- Load the instance data
- Create a folder named `output_plain_repeat`
- Run the instances
- Save solution details as `SolutionDict_InfoDict_dataOptionDict_repeat_<timestamp>_<r>.pkl` (where `timestamp` has the format `yyyy-mm-dd-hh-mm-ss` and `r` is the repeat number)
- Save a table named `Tables_ReportTable1_<timestamp>_<r>.xlsx` containing detailed results for each repeat
- Save a table named `Tables_ReportTable1_<timestamp>_runtime_stat_<r>.xlsx` containing runtime statistics across all `r` repeats
- Save a table named `Tables_ReportTable1_<timestamp>_agg<r>.xlsx` containing aggregated results across `r` repeats
Usage:
```bash
python run_plain_repeat.py 100 2 12  # timelimit, K, repeatNum
```

Note: This will take more than 240 hours if testing on instances with the same size and repetition count as Table 2.
Run `plot_perf_profile.py` to plot the performance profile figure:

- You should set `filepath` to the folder containing the solution files (`.pkl` files)
- The performance profile figure will be saved in the `Output` folder

```bash
python plot_perf_profile.py
```

See EC.4 for details.
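For reference, the core of a performance profile computation (in the style of Dolan and Moré) can be sketched as follows. This is an illustrative sketch only, not the actual implementation in `plot_perf_profile.py`:

```python
import numpy as np

def performance_profile(times, taus):
    """Compute Dolan-More performance profile curves.

    times: (n_instances, n_solvers) array of runtimes (np.inf for failures).
    taus:  1-D array of performance-ratio thresholds (>= 1).
    Returns an (n_taus, n_solvers) array: the fraction of instances each
    solver handles within a factor tau of the best solver on that instance.
    """
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)   # best runtime per instance
    ratios = times / best                     # performance ratios r_{p,s}
    n_instances = times.shape[0]
    return np.array([(ratios <= tau).sum(axis=0) / n_instances for tau in taus])
```

For example, with two solvers on two instances, `performance_profile([[1.0, 2.0], [3.0, 1.5]], [1.0, 2.0])` reports that each solver is fastest on half the instances at `tau = 1` and that both cover all instances at `tau = 2`.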
Run `varying_v0_a0.bash` in the terminal to plot and save the assortment map figures as shown in EC.4.1:

```bash
#!/bin/bash
# varying_v0_a0.bash

# Varying v0 (the utility of the outside option for the online segments)
python data_generate_for_varying_v0.py 100 50 5   # generate the required dataset
python run_varying_a0_v0.py 10.0 "varying_v0"
python plot_assortment_map.py "varying_v0"

# Varying a0 (α₀, the arrival ratio of the offline segment)
python data_generate_for_varying_a0.py 100 50 5   # generate the required dataset
python run_varying_a0_v0.py 10.0 "varying_a0"
python plot_assortment_map.py "varying_a0"
```

Since both EC.4.2 and EC.4.3 involve varying the arrival ratio (α₀) and the non-purchase utility (u₀), they are combined here.
Run `joint_benefit_luce_loss.bash` in the terminal to plot and save the box charts shown in EC.4.2 and EC.4.3:

```bash
#!/bin/bash
# joint_benefit_luce_loss.bash

# Generate data
python data_generate_for_varying_v0_repeat.py 100 50 5 3   # n, m, linspaceNumber, repeatNumber
python data_generate_for_varying_a0_repeat.py 100 50 5 3   # n, m, linspaceNumber, repeatNumber

# Run for given data
python run_joint_benefit_luce_loss.py 10 "varying_v0"
python run_joint_benefit_luce_loss.py 10 "varying_a0"

# Plot
python plot_joint_benefit_luce_loss.py "varying_v0"
python plot_joint_benefit_luce_loss.py "varying_a0"
```

Note: This will take more than 100 hours if testing on instances with the same size and repetition count as EC.4.2 and EC.4.3.
What is the influence of different K values in Algorithm 2 (where K is the number of cutting-plane generation rounds)?
See main results and discussion in EC.5.1.
- `run_different_K.py`: Implements different K values when calling Algorithm 2. Results (`.pkl` files) will be saved in subfolders of `output_different_K/<timestamp>_compare_different_K` for each K value.
- `plot_objChangewithK.py`: Generates the figure in EC.5.1.1 showing the impact of different K values on the reduced gap. Figures with filenames starting with `gapReducedChangewithK_` are saved in the `Output` folder.
- `plot_timeChangewithK.py`: Generates the figure in EC.5.1.2 showing the impact of different K values on computational performance. Figures with filenames starting with `timeChangewithK_` are saved in the `Output` folder.
```bash
#!/bin/bash
# different_K.bash
python run_different_K.py 100   # timelimit (seconds); you can set it to 3600
python plot_objChangewithK.py
python plot_timeChangewithK.py
```

Note: This will take more than 100 hours if testing on instances with the same size and repetition count as EC.5.1.
Run `run_callback.py` to implement the callback via Gurobi. See details in EC.5.2.

This will generate the table reported in EC.5.2. The table is saved in a subfolder with a filename ending in `_callback.xlsx`.

```bash
python run_callback.py
```

Note: This will take more than 100 hours if testing on instances with the same size and repetition count as the table in EC.5.2.
See EC.5.3 and the joint assortment and personalization problem (El Housni and Topaloglu 2023).
- `data_generation_for_CAP.py`: Generates data according to EC.5.3. The data (`.pkl` file) will be saved to the dataset folder.
- `run_compare_with_customization.py`: Tests the joint assortment and personalization problem (El Housni and Topaloglu 2023). See details in EC.5.3. This generates the table reported in EC.5.3. The table is saved in a subfolder with a filename ending in `_runtime_stat<r>.xlsx`.

```bash
python data_generation_for_CAP.py
python run_compare_with_customization.py
```

Note: If everything goes well, this comparison should complete within 5 hours.
The detailed algorithm for the Monte Carlo simulation is provided in the online appendix of our paper.
- `run_inventory_simulation.py`: Python implementation that automatically generates the corresponding tables.
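The precise simulation procedure is specified in the online appendix. Purely as a generic illustration of the Monte Carlo idea (all names, parameters, and dynamics below are assumptions, not the paper's algorithm), a revenue simulation under fixed starting inventory might look like:

```python
import random

def simulate_revenue(prices, stock, choice_probs, n_customers, n_runs, seed=0):
    """Generic Monte Carlo sketch (not the paper's algorithm): customers
    arrive sequentially and pick product i with probability choice_probs[i]
    (index len(prices) is the no-purchase option); a sale occurs only while
    stock remains. Returns the average total revenue over n_runs runs."""
    rng = random.Random(seed)
    n = len(prices)
    total = 0.0
    for _ in range(n_runs):
        inv = list(stock)          # reset inventory for each replication
        revenue = 0.0
        for _ in range(n_customers):
            i = rng.choices(range(n + 1), weights=choice_probs)[0]
            if i < n and inv[i] > 0:
                inv[i] -= 1
                revenue += prices[i]
        total += revenue
    return total / n_runs
```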
The detailed algorithm for the simple Revenue-Ordered heuristic method is first introduced in Section 2.2. The improved heuristic method is outlined as an algorithm in EC.5.5.
The implementation is incorporated into `BuildModels.py`.

- `data_generate_for_RO_heuristics_comparison.py`: Generates the instances for the results in EC.5.5.
- `run_compare_heuristics.py`: Generates results in the `./output_compare_heuristics_SO` folder. You should update the filename to point to the correct instance `.pkl` file.
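As background, the plain revenue-ordered heuristic for a single MNL segment can be sketched as follows. This is an illustrative sketch under standard MNL assumptions, not the implementation in `BuildModels.py`:

```python
def revenue_ordered_assortment(prices, weights, v0):
    """Revenue-ordered heuristic under a single MNL segment.

    Sorts products by price (revenue) in descending order and evaluates only
    the nested assortments {1}, {1,2}, ..., returning the one with the highest
    expected revenue. weights[i] is the MNL preference weight of product i,
    and v0 is the weight of the no-purchase option.
    """
    order = sorted(range(len(prices)), key=lambda i: prices[i], reverse=True)
    best_set, best_rev = [], 0.0
    current = []
    for i in order:
        current.append(i)
        denom = v0 + sum(weights[j] for j in current)
        rev = sum(prices[j] * weights[j] for j in current) / denom
        if rev > best_rev:
            best_rev, best_set = rev, sorted(current)
    return best_set, best_rev
```

Only the n nested assortments are evaluated, which is what makes the heuristic fast; the improved variant in EC.5.5 refines this basic idea.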
You can also convert the `.pkl` dataset to `.txt` format using `data_save_to_txt.py`. For details, please see the script.
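The idea behind such a conversion can be sketched in a few lines (a minimal sketch using the standard library; the real `data_save_to_txt.py` may differ):

```python
import pickle
import pprint

def pkl_to_txt(pkl_path, txt_path):
    """Load a pickled object and dump a human-readable representation to text."""
    with open(pkl_path, "rb") as f:
        obj = pickle.load(f)
    with open(txt_path, "w") as f:
        f.write(pprint.pformat(obj))
```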
Feel free to reach out if you're interested in contributing additional features to this project or if you find any errors in the code.