Docker is a modern technology for containerizing software. While packaging an application into a working image is relatively easy, creating a lightweight Docker image presents a challenge. In this note, I briefly discuss an example from my university class on software engineering.
The task was to containerize a card game web application, written by students as a group project. The application was created with React.js and ran in development mode (which supports tracing, debugging and so on) via npm start. The page can then be optimized for deployment via npm run build. The job of a Docker container is to serve this optimized version with a simple HTTP server.
Of course, the easiest way would be to do both the build and the deployment within the Create React App framework. But the optimal way is to stay minimalistic and leverage multi-stage Docker builds with task-tailored sub-images: one to build and another to serve. The image size indeed gets reduced from about 500 MB to about 20 MB! To build and serve we used, respectively, a Node.js image and a Static Web Server image. The build product of the first stage gets copied to the second image, which provides a lightweight server, leaving unessential stuff (cached files, development tools) behind. The implementation is shown below:
# docker build -t brydz:latest .
# docker run -it -p 3000:3000 brydz:latest sh

# stage 1: install dependencies and build the app
FROM node:18-alpine AS pre_build

WORKDIR /brydz

COPY client/package.json ./client/package.json
COPY client/package-lock.json ./client/package-lock.json
RUN npm install --prefix client
COPY client/ ./client/
ENV PUBLIC_URL="."
RUN npm run build --prefix client

# stage 2: move to a clean prod environment
FROM joseluisq/static-web-server:2-alpine AS final

WORKDIR /brydz

COPY --from=pre_build /brydz/client/build ./build

EXPOSE 3000

CMD ["static-web-server", "--port=3000", "--root=build", "--log-level=trace"]
Two-player games can be solved by following a very intuitive algorithm called Regret Matching 1. Players modify their action probabilities according to the so-called regrets or advantages, which can be thought of as the consequences of alternative choices. For a good overview of the topic, see the friendly yet detailed introduction by Neller and Lanctot 2.
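Concretely, if \(R_t(a)\) denotes a player's cumulative regret for action \(a\) after \(t\) rounds, regret matching plays each action proportionally to its positive regret, \(\sigma_{t+1}(a)=\max(R_t(a),0)/\sum_b \max(R_t(b),0)\), falling back to the uniform strategy when no regret is positive; this is exactly what the getStrategy helper in the snippet below implements.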
The following code snippet demonstrates an efficient implementation for two-player zero-sum finite games in PyTorch and tests it on the game Rock-Paper-Scissors with Crash. In this variant, scissors crash against rock and the loss is doubled. This shifts the equilibrium choices in favour of “paper”, to the exact proportions of 25%/50%/25%. Remarkably, regrets can be used to estimate the approximation accuracy, which approaches the perfect solution under mild conditions 3. This is also demonstrated in the snippet.
import torch

## Game: Rock-Paper-Scissors with Crash, the equilibrium achieved with 25%/50%/25%

N_ACTIONS = 3  # 0: rock, 1: paper, 2: scissors
PAYOFF = torch.tensor([[0, -1, 2], [1, 0, -1], [-2, 1, 0]], dtype=torch.float).cuda()  # gain of row player / loss of col player

## Utils

def getStrategy(cfr):
    """Return a strategy corresponding to observed regrets

    Args:
        cfr (array): counter-factual regret, shape (N_ACTIONS,)
    Returns:
        weights (array): strategy, shape (N_ACTIONS,)
    """
    weights = torch.clip(cfr, 0, torch.inf)
    weights = weights / torch.sum(weights)
    N_ACTIONS = weights.shape[0]
    weights = torch.nan_to_num(weights, nan=1 / N_ACTIONS)
    return weights

#@torch.jit.script
@torch.compile(mode='max-autotune')
def getEquilibrium(PAYOFF, N_ITER: int = 500, stochastic_advantage: bool = False):
    # auxiliary variables
    N_ACTIONS = PAYOFF.shape[0]
    cumCFR1 = torch.zeros(N_ACTIONS,).cuda()
    cumCFR2 = torch.zeros(N_ACTIONS,).cuda()
    cumStrategy1 = torch.zeros(N_ACTIONS,).cuda()
    cumStrategy2 = torch.zeros(N_ACTIONS,).cuda()
    strategy1 = torch.ones(N_ACTIONS,).cuda() / N_ACTIONS
    strategy2 = torch.ones(N_ACTIONS,).cuda() / N_ACTIONS
    # training loop
    for _ in range(N_ITER):
        # sample actions and observe regrets
        if stochastic_advantage:
            # a) stochastic variant, often implemented in tutorials
            action1 = torch.multinomial(strategy1, num_samples=1).squeeze()
            action2 = torch.multinomial(strategy2, num_samples=1).squeeze()
            cfr1 = PAYOFF[:, action2] - PAYOFF[action1, action2]
            cfr2 = -(PAYOFF[action1, :] - PAYOFF[action1, action2])
        else:
            # b) averaged variant
            PAYOFF_avg = strategy1.view(1, -1).mm(PAYOFF).mm(strategy2.view(-1, 1))
            cfr1 = (PAYOFF.mm(strategy2.view(-1, 1)) - PAYOFF_avg).squeeze()
            cfr2 = (strategy1.view(1, -1).mm(PAYOFF) - PAYOFF_avg).squeeze() * (-1)
        # update strategies proportionally to regrets
        strategy1 = getStrategy(cumCFR1)
        strategy2 = getStrategy(cumCFR2)
        # track cumulated regrets and strategies
        cumCFR1 += cfr1
        cumCFR2 += cfr2
        cumStrategy1 += strategy1
        cumStrategy2 += strategy2
    # averaged strategies converge to Nash Equilibrium
    avgStrategy1 = cumStrategy1 / cumStrategy1.sum()
    avgStrategy2 = cumStrategy2 / cumStrategy2.sum()
    # estimate approximation error (upper bound)
    eps = 2 * torch.max(cumCFR1.max(), cumCFR2.max()) / N_ITER
    torch.cuda.synchronize()
    return (avgStrategy1, avgStrategy2, eps)

getEquilibrium(PAYOFF)  # eps < 0.03
Regrets are expensive to compute in large games, but can be approximated using modern machine-learning techniques. This approach has recently found many applications, including solvers for Poker and even larger card games 4,5.
1.
Hart S, Mas-Colell A. A Simple Adaptive Procedure Leading to Correlated Equilibrium. Econometrica. Published online September 2000:1127-1150. doi:10.1111/1468-0262.00153
3.
Waugh K. Abstraction in Large Extensive Games. University of Alberta Libraries. Published online 2009. doi:10.7939/R3CM74
4.
Brown N, Lerer A, Gross S, Sandholm T. Deep Counterfactual Regret Minimization. In: Proceedings of the 36th International Conference on Machine Learning; 2019.
5.
Adams D. The Feasibility of Deep Counterfactual Regret Minimisation for Trading Card Games. AI 2022: Advances in Artificial Intelligence. Published online 2022:145-160. doi:10.1007/978-3-031-22695-3_11
When evaluating computing performance we look at various KPIs: memory consumption, utilisation of compute power, occupation of hardware accelerators, and – more recently – energy consumption and energy efficiency 1,2. For popular NVIDIA cards this can be addressed with the help of the NVIDIA Management Library, which allows developers to query details of the device state 3.
The library is easiest to use through the Python bindings available as pyNVML 4. Note that Python overheads may be problematic if higher-frequency querying is needed, and the API likely comes with its own overheads, so the readings should be understood as estimates.
Here is a simple script, which can be adjusted to query more details, if needed:
# see the NVIDIA docs: https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries
# to monitor GPU-1 and dump to a log file, run: python gpu_trace.py 1 log.csv

import sys
import time

import pynvml

pynvml.nvmlInit()

if __name__ == "__main__":
    gpu_index = int(sys.argv[1])  # device
    fname = sys.argv[2]  # log file
    with open(fname, 'w') as f:
        # select device
        device_handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        # prepare headers
        f.write('Timestamp;Temperature [C];Power [% max];GPU Util [% time];Mem Util [% time];Mem Cons [% max];Energy [kJ]\n')
        # get some metadata
        power_max = pynvml.nvmlDeviceGetPowerManagementLimit(device_handle)
        energy_start = pynvml.nvmlDeviceGetTotalEnergyConsumption(device_handle)
        while True:
            # timestamp
            timestamp = time.time()
            # temperature
            temp = pynvml.nvmlDeviceGetTemperature(device_handle, 0)  # TODO: set sensor if many?
            # power [% of max]
            power = pynvml.nvmlDeviceGetPowerUsage(device_handle) / power_max * 100.0
            # memory and gpu utilisation [%]
            util = pynvml.nvmlDeviceGetUtilizationRates(device_handle)
            # memory consumption [%]
            mem_info = pynvml.nvmlDeviceGetMemoryInfo(device_handle)
            mem_cons = mem_info.used / mem_info.total * 100.0
            # energy delta in kJ (the API reports mJ)
            energy = (pynvml.nvmlDeviceGetTotalEnergyConsumption(device_handle) - energy_start) / 10**6
            # output result
            result = (timestamp, temp, power, util.gpu, util.memory, mem_cons, energy)
            f.write(';'.join(map(str, result)) + '\n')
            time.sleep(0.1)
And here is how to post-process and present results:
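A minimal sketch, assuming the semicolon-separated log format written by the script above (the file name and plot layout are illustrative):

# read the trace produced by gpu_trace.py and plot utilisation, power and temperature over time
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('log.csv', sep=';')
df['Timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')
df.set_index('Timestamp', inplace=True)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 6), sharex=True)
df[['GPU Util [% time]', 'Mem Util [% time]', 'Mem Cons [% max]']].plot(ax=ax1)
ax1.set_ylabel('%')
df[['Power [% max]', 'Temperature [C]']].plot(ax=ax2)
ax2.set_ylabel('Power / Temperature')
print(f"Total energy: {df['Energy [kJ]'].iloc[-1]:.2f} kJ")
plt.tight_layout()
plt.show()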
The example shown below comes from an ETL process which utilizes a GPU.
Note that, in this case, monitoring identified likely bottlenecks: the GPU goes idle on a periodic basis (likely due to device-to-host transfers) and is overall underutilised. The estimate of energy consumed is a nice feature, as it would be hard to measure accurately from power traces (due to high variation and subsampling).
Note that utilisation should be understood as time occupation, for both memory and compute. From the documentation:
unsigned int gpu: Percent of time over the past sample period during which one or more kernels was executing on the GPU.
unsigned int memory: Percent of time over the past sample period during which global (device) memory was being read or written.
In this example, we see different power management strategies on two similar devices:
Traces on device 1 and device 2.
Case Study 3: Energy Efficiency of Deep Learning
Here we reproduce some results from Tang et al. 1 to illustrate how adjusting the frequency can be used to minimise the energy spent per computational task (in their case: image prediction). Higher performance comes at the price of excessive energy use, so the energy curve assumes a typical parabolic shape. Note that, in general, the energy-efficient configuration may be optimised over both clock and memory frequencies 5.
And here is the code to reproduce:
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# source: Fig 4d, data for resnet-b32 from "The Impact of GPU DVFS on the Energy and Performance of Deep Learning: an Empirical Study"
freq = [544, 683, 810, 936, 1063, 1202, 1328]
power = [57, 62, 65, 70, 78, 88, 115]  # W = J/s
requests = [60, 75, 85, 95, 105, 115, 120]  # requests/s

data = pd.DataFrame(data=zip(freq, power, requests), columns=['Frequency', 'Power', 'Performance'])
data['Energy'] = data['Power'] / data['Performance']  # [J/s] / [Images/s] = [J/Image]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))

sns.lineplot(data=data, x='Frequency', y='Performance', ax=ax1, color='orange', label='Performance', marker='o')
ax1.set_xticks(data['Frequency'])
ax1.set_ylabel('Image / s')
ax1.set_xlabel('Frequency [MHz]')
ax1.legend(loc=0)

ax12 = ax1.twinx()
sns.lineplot(data=data, x='Frequency', y='Power', ax=ax12, color='steelblue', label='Power', marker='D')
ax12.set_ylabel('W')
ax12.legend(loc=1)

sns.lineplot(data=data, x='Frequency', y='Energy', ax=ax2, label='Energy')
ax2.set_xticks(data['Frequency'])
ax2.set_ylabel('J / Image')
ax2.set_xlabel('Frequency [MHz]')
ax2.legend(loc=0)

plt.title('Performance, power, and energy for training of resnet-b32 network on P100.\n Reproduced from: "The Impact of GPU DVFS on the Energy and Performance of Deep Learning: an Empirical Study"')
plt.tight_layout()
plt.show()
References
1.
Tang Z, Wang Y, Wang Q, Chu X. The Impact of GPU DVFS on the Energy and Performance of Deep Learning. Proceedings of the Tenth ACM International Conference on Future Energy Systems. Published online June 15, 2019. doi:10.1145/3307772.3328315
2.
Tang K, He X, Gupta S, Vazhkudai SS, Tiwari D. Exploring the Optimal Platform Configuration for Power-Constrained HPC Workflows. 2018 27th International Conference on Computer Communication and Networks (ICCCN). Published online July 2018. doi:10.1109/icccn.2018.8487322
5.
Fan K, Cosenza B, Juurlink B. Accurate Energy and Performance Prediction for Frequency-Scaled GPU Kernels. Computation. Published online April 27, 2020:37. doi:10.3390/computation8020037
Starting from version 2.x, PyTorch, a popular deep-learning framework, introduces a JIT compiler, torch.compile. In this post, I share a non-trivial example demonstrating how this tool can reduce the memory footprint on GPU. The point of departure is a subroutine which computes a similarity measure, similar to covariance but not as friendly to compute.
For two tensors of shape \( (n_{samples},n_{dim})\) it produces a similarity tensor of shape \( (n_{dim},n_{dim})\). However, the logic uses broadcasting when constructing and reducing an intermediate tensor of shape \( (n_{samples},n_{dim},n_{dim})\). Thus, the naive implementation takes \( O(n_{samples}\cdot n_{dim}^2)\) memory, as seen in the profiler. After compilation, this bottleneck is removed 💪
This is what the profiling code looks like:
import torch
from torch.profiler import profile, record_function, ProfilerActivity

x = torch.randn((256, 2000)).float().cuda()
torch.cuda.synchronize()

#@torch.compile(mode='max-autotune')  # compare the effect with and without!
def similarity(x, y):
    xy = x[:, :, None] - y[:, None, :]
    xy = xy.abs().lt(1).sum(axis=0)
    xy = xy.to('cpu')
    return xy

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], profile_memory=True) as prof:
    with record_function("memory profile"):
        similarity(x, x)
        torch.cuda.synchronize()

profiler_summary = prof.key_averages().table(sort_by="self_cuda_memory_usage", row_limit=10)
And this is a useful utility to convert the profiling output to a table:
# torch profiler output to pandas
import io
import re

import pandas as pd

total_width = re.search('\n', profiler_summary).start()
widths = [t.end() - t.start() + 2 for t in re.finditer('-{1,}', profiler_summary[:total_width])]
df = pd.read_fwf(io.StringIO(profiler_summary), widths=widths)
df.columns = df.loc[0]
df.drop([0, 1], axis=0, inplace=True)
df.set_index(df.columns[0], inplace=True)
df.head(10)
This is the output without the compiler; note the huge memory excess in the broadcasted tensor operations:
Locations on maps are often provided in a Coordinate Reference System (CRS). In computer vision projects, however, we need to translate them to pixels. Plotting and transformations can be accomplished with the rasterio Python module.
Below I share a useful snippet showing how to convert CRS locations to pixel coordinates:
import matplotlib.pyplot as plt
import geopandas
import rasterio as rs
from rasterio.plot import show
import pandas as pd
from shapely.geometry import Point
from PIL import Image, ImageDraw

fig, axs = plt.subplots(1, 2)

# open the image and its annotations
img_path = "trees_counting_mount/polish_ortophoto/1_2000/images/66579_623608_8.135.12.19_cropped_2000.tif"
label_path = "trees_counting_mount/polish_ortophoto/1_2000/annotations/66579_623608_8.135.12.19_cropped_2000.csv"
raster = rs.open(img_path, crs="EPSG:2180")
label_df = pd.read_csv(label_path)

# extract selected points
geometry = list(map(Point, label_df[["x", "y"]].values))
idxs = [0, 100, 200]
Ps = [geometry[idx] for idx in idxs]

# plot points in two alternative ways!
ax = axs[0]
show(raster, ax=ax)

# variant 1: plot geo-coordinates with geopandas.GeoDataFrame
geo_df = geopandas.GeoDataFrame(None, geometry=Ps, crs="EPSG:2180")
geo_df.plot(ax=ax)

ax = axs[1]
# variant 2: convert geo-coordinates to pixel locations
ys, xs = rs.transform.rowcol(raster.transform, [P.x for P in Ps], [P.y for P in Ps])
img = Image.open(img_path)
img_draw = ImageDraw.Draw(img)
for x, y in zip(xs, ys):
    img_draw.ellipse((x - 150, y - 150, x + 150, y + 150), fill="yellow")
plt.imshow(img)

plt.title("Points in geo-coordinates (left) and image pixel coordinates (right).")
plt.tight_layout()
plt.show()
CUDA is a computing platform for graphical processing units (GPUs) developed by NVIDIA, widely used to accelerate machine learning. Existing frameworks, such as TensorFlow or PyTorch, utilize it under the hood without asking the user for any specific coding. However, it is still necessary to set up its dependencies, particularly the compiler nvcc, properly to benefit from the acceleration. In this short note, I share an interesting use case that occurred when prototyping on a Kaggle Docker image and an NVIDIA Docker image.
Compatibility of CUDA tools and targeted libraries
It turns out that one of the Kaggle images was released with incompatible CUDA dependencies: the compilation tools were not aligned with PyTorch, as revealed when attempting to compile detectron2, an object detection library by Facebook.
(base) maciej.skorski@shared-notebooks:~$ docker images
REPOSITORY                        TAG                        IMAGE ID       CREATED        SIZE
gcr.io/kaggle-gpu-images/python   latest                     87983e20c290   4 weeks ago    48.1GB
nvidia/cuda                       11.6.2-devel-ubuntu20.04   e1687ea9fbf2   7 weeks ago    5.75GB
gcr.io/kaggle-gpu-images/python   <none>                     2b12fe42f372   2 months ago   50.2GB

(base) maciej.skorski@shared-notebooks:~$ docker run -d \
  -it \
  --name kaggle-test \
  --runtime=nvidia \
  --mount type=bind,source=/home/maciej.skorski,target=/home \
  2b12fe42f372

(base) maciej.skorski@shared-notebooks:~$ docker exec -it kaggle-test python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
...
RuntimeError:
The detected CUDA version (12.1) mismatches the version that was used to compile
PyTorch (11.8). Please make sure to use the same CUDA versions.
In order to compile detectron2, it was necessary to align the CUDA toolkit version. Rather than trying to install it manually – which is known to be an error-prone task – a working solution was to change the Kaggle image. It turns out that the gap was bridged in a subsequent release:
(base) maciej.skorski@shared-notebooks:~$ docker run 87983e20c290 nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

(base) maciej.skorski@shared-notebooks:~$ docker run 2b12fe42f372 nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
And indeed, the Facebook library installed smoothly under the new image 👍
Consider the simple CUDA script querying the GPU device properties:
// query_GPU.cu
#include <stdio.h>

int main() {
  int nDevices;
  cudaGetDeviceCount(&nDevices);
  for (int i = 0; i < nDevices; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("Device Number: %d\n", i);
    printf(" Name: %s\n", prop.name);
    printf(" Integrated: %d\n", prop.integrated);
    printf(" Compute capability: %d.%d\n", prop.major, prop.minor);
    printf(" Peak Memory Bandwidth (GB/s): %f\n\n",
           2.0 * prop.memoryClockRate * (prop.memoryBusWidth / 8) / 1.0e6);
    printf(" Total global mem: %ld\n", prop.totalGlobalMem);
    printf(" Multiprocessor count: %d\n", prop.multiProcessorCount);
  }
}
This code compiles and presents GPU properties only under the image equipped with the matching major compiler version (select the appropriate image here):
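For instance, a minimal sketch of compiling and running the query inside a devel container (the image tag is the one from the listing above; the mount path is illustrative):

# start the devel container with GPU access and the working directory mounted
docker run --rm -it --runtime=nvidia \
  --mount type=bind,source=$(pwd),target=/workspace \
  nvidia/cuda:11.6.2-devel-ubuntu20.04
# inside the container: compile and run
nvcc /workspace/query_GPU.cu -o /workspace/query_GPU
/workspace/query_GPU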
As the last example, consider the recent cuZK project, which implements some state-of-the-art cryptographic protocols on GPU. The original code was missing dependencies and compilation instructions, therefore I shared a working fork.
To work with the code, let's use the NVIDIA Docker image with the appropriate version; here I selected the tag 11.6.2-devel-ubuntu20.04. Check out the code and start a container mounting the working directory with the GitHub code, like below:
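A minimal sketch of that setup (substitute the fork URL mentioned above; paths are illustrative):

# check out the code (use the working fork mentioned above)
git clone <fork-url> cuZK
# start a container with GPU access and the code mounted
docker run -it --runtime=nvidia \
  --mount type=bind,source=$(pwd)/cuZK,target=/cuZK \
  nvidia/cuda:11.6.2-devel-ubuntu20.04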
Who does not enjoy lego bricks, raise a hand! In this post, I share an elegant and efficient way of plotting bricks in a 3D view in TikZ. Briefly speaking, it utilizes canvas transforms to plot the facets, and describes the boundaries of the studs in a simple way with cylindrical coordinates based on the azimuth angle (localizing the extreme edges might be a challenge on its own). While there are other packages, like TikZbricks, this method seems simpler and brings some educational value regarding the geometry of cylinders.
\documentclass[12pt]{standalone}
\usepackage{pgfplots}
\usepackage{tikz-3dplot}
\begin{document}
\pgfmathsetmacro{\pinradius}{0.25}
% elevation and azimuth for 3D-view
\def\rotx{60}
\def\rotz{120}

\newcommand{\brick}[8]{
  \pgfmathsetmacro{\posx}{#1}
  \pgfmathsetmacro{\posy}{#2}
  \pgfmathsetmacro{\posz}{#3}
  \pgfmathsetmacro{\cubex}{#4}
  \pgfmathsetmacro{\cubey}{#5}
  \pgfmathsetmacro{\cubez}{#6}
  % cube by rectangle facets
  \begin{scope}
    \begin{scope}[canvas is yx plane at z=\posz,transform shape]
      \draw[fill=#8] (\posy,\posx) rectangle ++(\cubey,\cubex);
    \end{scope}
    \begin{scope}[canvas is yx plane at z=\posz+\cubez,transform shape]
      \draw[fill=#8] (\posy,\posx) rectangle ++(\cubey,\cubex);
    \end{scope}
    \begin{scope}[canvas is yz plane at x=\posx+\cubex,transform shape]
      \draw[fill=#8] (\posy,\posz) rectangle ++(\cubey,\cubez) node[pos=.5] {#7};
    \end{scope}
    \begin{scope}[canvas is xz plane at y=\posy+\cubey,transform shape]
      \draw[fill=#8] (\posx,\posz) rectangle ++(\cubex,\cubez);
    \end{scope}
  \end{scope}
  % studs by arcs and extreme edges
  \foreach \i in {1,...,\cubey}{
    \foreach \j in {1,...,\cubex}{
      % upper part - full circle
      \draw[thin] (\posx-0.5+\j,\posy-0.5+\i,\posz+\cubez+0.15) circle (\pinradius);
      % lower part - arc
      \begin{scope}[canvas is xy plane at z=\posz+\cubez]
        \draw[thin] ([shift=(\rotz:\pinradius)]\posx-0.5+\j,\posy-0.5+\i) arc (\rotz:\rotz-180:\pinradius);
      \end{scope}
      \begin{scope}[shift={(\posx-0.5+\j,\posy-0.5+\i)}]
        % edges easily identified in cylindrical coordinates!
        \pgfcoordinate{edge1_top}{ \pgfpointcylindrical{\rotz}{\pinradius}{\posz+\cubez+0.15} };
        \pgfcoordinate{edge1_bottom}{ \pgfpointcylindrical{\rotz}{\pinradius}{\posz+\cubez} };
        \draw[] (edge1_top) -- (edge1_bottom);
        \pgfcoordinate{edge1_top}{ \pgfpointcylindrical{\rotz+180}{\pinradius}{\posz+\cubez+0.15} };
        \pgfcoordinate{edge1_bottom}{ \pgfpointcylindrical{\rotz+180}{\pinradius}{\posz+\cubez} };
        \draw[] (edge1_top) -- (edge1_bottom);
      \end{scope}
    }
  }
}

\tdplotsetmaincoords{\rotx}{\rotz}
\begin{tikzpicture}[tdplot_main_coords]
  % draw axes
  \coordinate (O) at (0,0,0);
  \coordinate (A) at (5,0,0);
  \coordinate (B) at (0,5,0);
  \coordinate (C) at (0,0,5);
  \draw[-latex] (O) -- (A) node[below] {$x$};
  \draw[-latex] (O) -- (B) node[above] {$y$};
  \draw[-latex] (O) -- (C) node[left] {$z$};
  % draw bricks
  \brick{0}{1}{0}{3}{3}{1}{Lego}{blue!50};
  \brick{0}{1}{2}{2}{3}{1}{Enjoys}{green!50};
  \brick{0}{1}{4}{1}{3}{1}{Everybody}{red!50};
\end{tikzpicture}
\end{document}
Drawing cylinders in vector graphics is a common task. It is less trivial than it looks at first glance, due to the challenge of finding a proper projection. In this post, I share a simple and robust recipe using the tikz-3dplot package of LaTeX. As opposed to many examples shared online, this approach automatically identifies the boundary of a cylinder under a given perspective. The trick is to identify the edges using the azimuth angle in cylindrical coordinates 💪.
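To make this concrete, here is a minimal sketch of a standalone cylinder drawn this way, adapted from the brick code above (the radius and height are illustrative): the two silhouette edges sit exactly at the azimuths \rotz and \rotz+180, and only the front arc of the bottom face, spanning those azimuths, is visible.

\documentclass[12pt]{standalone}
\usepackage{pgfplots}
\usepackage{tikz-3dplot}
\begin{document}
% elevation and azimuth for 3D-view
\def\rotx{60}
\def\rotz{120}
\pgfmathsetmacro{\radius}{1.5}
\pgfmathsetmacro{\cylheight}{3}
\tdplotsetmaincoords{\rotx}{\rotz}
\begin{tikzpicture}[tdplot_main_coords]
  % bottom face: only the front arc (from azimuth \rotz to \rotz-180) is visible
  \begin{scope}[canvas is xy plane at z=0]
    \draw[thin] (\rotz:\radius) arc (\rotz:\rotz-180:\radius);
  \end{scope}
  % silhouette edges sit at azimuths \rotz and \rotz+180, found via cylindrical coordinates
  \pgfcoordinate{edgeA_bottom}{\pgfpointcylindrical{\rotz}{\radius}{0}};
  \pgfcoordinate{edgeA_top}{\pgfpointcylindrical{\rotz}{\radius}{\cylheight}};
  \draw (edgeA_bottom) -- (edgeA_top);
  \pgfcoordinate{edgeB_bottom}{\pgfpointcylindrical{\rotz+180}{\radius}{0}};
  \pgfcoordinate{edgeB_top}{\pgfpointcylindrical{\rotz+180}{\radius}{\cylheight}};
  \draw (edgeB_bottom) -- (edgeB_top);
  % top face: full circle
  \begin{scope}[canvas is xy plane at z=\cylheight]
    \draw[thin,fill=blue!20] (0,0) circle (\radius);
  \end{scope}
\end{tikzpicture}
\end{document}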
In object-oriented programming, there are plenty of accessors and mutators to test. This post demonstrates that this effort can be automated with reflection 🚀. The inspiration came from discussions I had with my students during our software-engineering class: how to increase code coverage without lots of manual effort? 🤔
Roughly speaking, the reflection mechanism allows the code to analyse itself. At runtime, we are able to construct calls based on extracted class properties. The idea is not novel, see for instance this gist. To add value and improve the presentation, I modernized and completed the code into a fully featured project on GitHub, with CI/CD on GitHub Actions and Code Coverage connected 😎.
Here is what the testing class looks like. Java reflection accesses the classes, extracts fields and their types, and constructs calls with type-matching values accordingly:
// tester class
class AutoTests {

    private static final Class[] classToTest = new Class[] {
        // the list of classes to test
        PersonClass.class,
        AnimalClass.class
    };

    @Test
    public void correctGettersSetters() {
        for (Class aClass : classToTest) {
            Object instance;
            try {
                instance = aClass.getDeclaredConstructor().newInstance();
                Field[] declaredFields = aClass.getDeclaredFields();
                for (Field f : declaredFields) {
                    // get the field getter and setter, following the Java naming convention (!)
                    // www.theserverside.com/feature/Java-naming-conventions-explained
                    String name = f.getName();
                    name = name.substring(0, 1).toUpperCase() + name.substring(1);
                    String getterName = "get" + name;
                    String setterName = "set" + name;
                    Method getterMethod = aClass.getMethod(getterName);
                    Method setterMethod = aClass.getMethod(setterName, getterMethod.getReturnType());
                    // prepare a test value based on the field type
                    Object testVal = null;
                    Class<?> fType = f.getType();
                    if (fType.isAssignableFrom(Integer.class)) {
                        testVal = 1234;
                    } else if (fType.isAssignableFrom(String.class)) {
                        testVal = "abcd";
                    }
                    // test by composing the setter and getter
                    setterMethod.invoke(instance, testVal);
                    Object result = getterMethod.invoke(instance);
                    System.out.printf("Testing class=%s field=%s...\n", aClass.getName(), f.getName());
                    assertThat(result)
                        .as("in class %s fields %s", aClass.getName(), f.getName())
                        .isEqualTo(testVal);
                }
            } catch (Exception e) {
                System.out.println(e.toString());
            }
        }
    }
}
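For completeness, a minimal sketch of a class such a test would exercise; the name PersonClass comes from the list above, while its fields are purely hypothetical and only need to follow the bean naming convention:

// a hypothetical class under test, following the getter/setter naming convention
public class PersonClass {
    private Integer age;
    private String name;

    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}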
In my recent open source contribution I enabled callbacks in the scalable (multi-core) implementation of Latent Dirichlet Allocation in the gensim library 1. This will, in turn, allow users to tune this popular topic extraction model faster and more accurately.
An obvious use case is monitoring and early stopping of training, with popular coherence metrics such as \(U_{mass}\) and \(C_V\) 2. On the News20Group dataset, the training performance looks as follows:
Training performance of Multi-Core LDA on 20 Newsgroups data, monitored by callbacks.
The achieved scores are decent, actually better than reported in the literature 3 – but this may be due to preprocessing, not early stopping. A full example is shared in this Kaggle notebook.
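For illustration, a minimal sketch of wiring a coherence callback into the multi-core model (assuming a list of tokenized documents; the hyperparameters are indicative, see the notebook for the full pipeline):

from gensim.corpora import Dictionary
from gensim.models import LdaMulticore
from gensim.models.callbacks import CoherenceMetric

# texts: list of tokenized documents, e.g. from the 20 Newsgroups dataset
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# log the c_v coherence after every pass
coherence_cv = CoherenceMetric(texts=texts, dictionary=dictionary, coherence='c_v', logger='shell')

lda = LdaMulticore(
    corpus=corpus,
    id2word=dictionary,
    num_topics=20,
    passes=10,
    workers=4,
    callbacks=[coherence_cv],  # supported by LdaMulticore thanks to the contribution described above
)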
1.
Řehůřek R, Sojka P. Software Framework for Topic Modelling with Large Corpora. Published online 2010. doi:10.13140/2.1.2393.1847
2.
Röder M, Both A, Hinneburg A. Exploring the Space of Topic Coherence Measures. Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. Published online February 2, 2015. doi:10.1145/2684822.2685324
3.
Zhang Z, Fang M, Chen L, Namazi Rad MR. Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Published online 2022. doi:10.18653/v1/2022.naacl-main.285