The Mysterious UITableView Case



We recently converted our project from Swift 2.3 to Swift 3. Everything was fine until I had to create a new screen that contained a table view. Pretty basic task, huh? Well, one problem appeared, and I literally spent an hour trying to resolve it.

Ok let me get to the description of the case itself.

Case description

Below you can find a very basic example of code that illustrates the issue:

import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let view = UIView(frame: CGRect(x: 0, y: 0, width: 0, height: .min))
        self.tableView.tableHeaderView = view
        self.tableView.tableFooterView = view

        self.tableView.dataSource = self
        self.tableView.delegate = self
    }
}

extension ViewController: UITableViewDataSource {
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 5
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        var cell: UITableViewCell!

        if let newCell = tableView.dequeueReusableCell(withIdentifier: "WHAT IS THE KAPPA") {
            cell = newCell
        } else {
            cell = UITableViewCell(style: .default, reuseIdentifier: "WHAT IS THE KAPPA")
        }

        cell.textLabel?.text = "O_O"
        return cell
    }
}

extension ViewController: UITableViewDelegate {
    func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
        return 44.0
    }
}
Seems to be ok: it builds without errors and even without warnings. What can go wrong, huh?

Well, if you actually build and run this code, you will get an empty screen – no table view content at all.

Erm, where did the Table View go?


I hit this issue and spent about an hour debugging and searching for what could cause such a problem. Basically, if you try to debug this code, you will see that these functions:

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 5
    }

    func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
        return 44.0
    }

are actually getting called. However, this function:

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    var cell: UITableViewCell!

    if let newCell = tableView.dequeueReusableCell(withIdentifier: "WHAT IS THE KAPPA") {
        cell = newCell
    } else {
        cell = UITableViewCell(style: .default, reuseIdentifier: "WHAT IS THE KAPPA")
    }

    cell.textLabel?.text = "O_O"
    return cell
}

is never fired – not even once.


Basically, the solution came to me in a pretty random way. I looked at the setup of the table view and saw the following:

let view = UIView(frame: CGRect(x: 0, y: 0, width: 0, height: .min))
self.tableView.tableHeaderView = view
self.tableView.tableFooterView = view

Seems rather normal, doesn’t it? Well, one thing caught my eye here – the small, dirty “.min” at the end of the first line.

I thought: “Wasn’t that changed to something else?” A quick search in the codebase showed that “.min” had been replaced with “CGFloat.leastNormalMagnitude”. Let’s change “.min” to “CGFloat.leastNormalMagnitude” in the line above and run it – and the table view shows up, cells and all.


The most confusing thing for me was that the compiler wasn’t complaining about it at all, and yet it somehow prevented cellForRow(at:) from being called, while the other functions were working just fine.
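My best guess at why this happens (this is my own interpretation, not anything officially confirmed): with integer literals for the other arguments, the compiler is free to pick CGRect’s Int-based initializer, and Int does have a “.min” – a huge negative number – so the code type-checks, but the header/footer view gets an absurd negative height:

```swift
import CoreGraphics

// `.min` is inferred as Int.min here, not as a tiny CGFloat:
let broken = CGRect(x: 0, y: 0, width: 0, height: .min)
// On a 64-bit platform, Int.min is -9223372036854775808

// What was actually intended after the Swift 3 rename:
let fixed = CGRect(x: 0, y: 0, width: 0,
                   height: CGFloat.leastNormalMagnitude) // tiny positive height
```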

I hope this helps someone resolve this quite strange and frustrating issue.

Good luck in bug hunting!



Hunting down TestFlight bugs



Recently, I ran into one of the most annoying issues I’ve had at work so far. The problem was the following: we were about to release a new application to the store. For this purpose, we prepared a build for TestFlight and submitted it for external testing.

And then – KABOOM – testers (multiple of them) reported that the application was crashing literally on startup. So, the user launches the application, and it instantly crashes and closes itself.

After almost a week of trial and error, I managed to resolve the issue. The purpose of this post is to share that experience, so that maybe someone will resolve their own issue quicker than I did.

Getting to work

So, first of all – make sure that you’re not able to reproduce the crash locally:

  1. Edit the scheme -> change the build configuration to “Release”
  2. Run the application
  3. Try to reproduce the crash

Ok, if you were able to reproduce the crash – great, you can fix it and skip the rest of this post.

In my case, I wasn’t able to get away so easily.

Well, if this didn’t work – what are we going to do next? We’re going to get the crash logs from TestFlight.

Finding crash logs from TestFlight

If the iTunes Connect account is not yours and belongs to your company or client – grab the credentials and add that account to your Xcode:

  1. Click “Xcode” menu
  2. Choose “Preferences”
  3. Select tab “Accounts”
  4. Click “+” on the bottom
  5. Select “Add Apple ID”
  6. Enter credentials that you’ve grabbed

If everything is alright, you should be able to see crash logs from TestFlight now:

  1. Click menu “Window”
  2. Select “Organizer”
  3. Select tab “Crashes”
  4. Select build that was crashing
  5. Wait a second and list of crashes should appear

Ok, now you should have the crash logs. If you’re lucky enough – they’re already symbolicated for you (i.e. they contain actual function names from your code, not just memory addresses).

If they’re actually not symbolicated you will see something like this:


I have removed the name of the project (since it’s a project from my work), but it doesn’t really matter here. Instead of function names and line numbers in the code, there are only raw memory addresses. That’s an unsymbolicated crash log, and if you get something like this – the next section explains how to symbolicate it.
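For illustration (these frames are made up, not from the real project – the original screenshot is not available here), an unsymbolicated crashed thread looks roughly like the first block below, and after symbolication like the second:

```
Thread 0 Crashed:
0   MyApp    0x00000001000a4f21  0x100098000 + 53025
1   MyApp    0x00000001000a3d10  0x100098000 + 48400

-- after symbolication --

Thread 0 Crashed:
0   MyApp    0x00000001000a4f21  ViewController.viewDidLoad() (ViewController.swift:42)
1   MyApp    0x00000001000a3d10  ViewController.setupTableView() (ViewController.swift:77)
```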


Well, I wasn’t that lucky, so I had to symbolicate it myself. If you’re also not lucky, here is how you can do it:

  1. Go to iTunes Connect
  2. Log in with credentials for application
  3. Click “My Apps”
  4. Select “TestFlight” tab
  5. Select “iOS” on left menu
  6. In list select build that was crashing
  7. Click “Download dSYM”

Ok, now you should have the symbols that will help you symbolicate the crash log. First of all, you have to get the raw crash file. In order to do so, do the following:

  1. Return to the Organizer’s “Crashes” tab
  2. Right-click on the crash that you want to symbolicate
  3. Choose “Show in Finder”
  4. You should get a Finder window with a file selected (extension: “xccrashpoint”)
  5. Right-click on it and choose “Show Package Contents”
  6. Navigate down the folders until you get to the one containing “.crash” files

Basically, a .crash file is a simple text file that contains all the details related to the crash. Copy any of these files to your working directory. I would suggest renaming it to something shorter, so it’s easier to type in Terminal. I used “temp.crash”.

Now you have to find the matching dSYM for this crash. Open the crash log and find the line “Binary Images”. Right after it there should be a line with your application name in it and, in angle brackets, a long hexadecimal string. This is the UUID, which you can use to find out which dSYM corresponds to this crash log.


Navigate to the folder with the dSYMs downloaded from iTunes Connect and find the dSYM whose UUID matches the one in the crash log. Note that the dSYM names are in upper case and separated by “-” (they look like “503C13BD-BB1A-3885-ADB6-82CE15C90BF8.dSYM”), while the UUID in the crash log is lower case without separators. It might take a moment, but you should find it rather quickly. When you find it – copy it to the working directory where you previously copied the .crash file. I would again suggest giving it a shorter name – I used “temp.dSYM”.
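A quick way to pull the UUID out of that “Binary Images” line – a sketch that assumes the usual crash-log format, with the UUID as 32 lower-case hex characters in angle brackets (the sample line below is made up):

```shell
# dSYM names from iTunes Connect are upper case, so upper-case the UUID
# from the crash log to make it easier to compare by eye:
line='0x100034000 - 0x1000b3fff MyApp arm64 <503c13bdbb1a3885adb682ce15c90bf8> /var/containers/MyApp.app/MyApp'
uuid=$(echo "$line" | grep -oE '<[0-9a-f]{32}>' | tr -d '<>' | tr 'a-f' 'A-F')
echo "$uuid"
```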

Now the last step: symbolication itself.

First of all, you should find the exact path to “symbolicatecrash” on your system. You can do it in the following way:

  1. Open Terminal
  2. Execute: “cd /Applications/Xcode.app”
  3. Execute: “find . -name symbolicatecrash”

You will get something like this: “./Contents/SharedFrameworks/DVTFoundation.framework/Versions/A/Resources/symbolicatecrash”.
Note that this is a relative path. To make it absolute, just replace the dot at the start with “/Applications/Xcode.app”, so you get: “/Applications/Xcode.app/Contents/SharedFrameworks/DVTFoundation.framework/Versions/A/Resources/symbolicatecrash”.

Note that the path may vary between Xcode versions (the path above is for Xcode 8) – which is why “find” is the more robust way to locate it for your version.

Now with that path navigate to your working directory (where .crash and .dSYM files are located) and do following:

  1. First of all, run: “export DEVELOPER_DIR=/Applications/Xcode.app/Contents/Developer”
  2. Then run: “/Applications/Xcode.app/Contents/SharedFrameworks/DVTFoundation.framework/Versions/A/Resources/symbolicatecrash temp.crash temp.dSYM > report.txt” (use the path you got above via “find”)

If everything is ok, after a few seconds you should get a symbolicated crash log (in a file called “report.txt” in this case; you can replace that with any other name). Check it – you are looking for the crashed thread. Analyze the crash log and try to figure out what exactly went wrong.

Unfortunately, for me it wasn’t enough.

Magic crashed line

After symbolicating the crash log, I realized that it pointed to a line that is not present in the code. So how are we supposed to resolve such an issue?

Basically, such a problem may be related to compiler optimizations. The solution here is to turn them off:


So, here is what we’re going to do:

  1. Open your project settings in Xcode (the top-level project, not a target)
  2. Open “Build Settings”
  3. Search for “Optimization”
  4. Set the Optimization Level for code generation to “None” (there are two places: one for the Swift compiler and one for Apple LLVM)
  5. Prepare a new build for TestFlight and submit it
  6. Ask the testers to reproduce the crash

After the testers confirm that the crash happened on the new build – go to the Organizer once again and find this new crash. Now it should be much more verbose than the previous version. Symbolicate it, and you should get all the available information related to the crash.

Basically, turning off optimizations helped me find the root cause of the crash. Hopefully, it will do the same for you.

Good luck with bug hunting!


  1. Tutorial by Christopher Hale on iOS crash symbolication

Building FFmpeg for Android




Some time ago, I got a task at work that required processing video on Android. As you are probably aware – Android doesn’t deliver a built-in tool for such a task (ok-ok, there actually is MediaCodec, which, in a way, allows you to perform video processing, but more about that in the next post). After some googling, I came to the conclusion that FFmpeg ideally fit the requirements of the task. Those requirements, by the way, were the following: the application had to trim a video down to 30 seconds, lower its bitrate, and crop it to a square – and all of that in less than 30 seconds.

“Well, now all we need is to build FFmpeg and link it to the application. Like shooting fish in a barrel!”, I thought.

“Good luck!”, said FFmpeg.

For the next two weeks I fought with the NDK, with FFmpeg’s configure script, with linking, and with other stuff. It wasn’t easy to pull everything together, but in the end I managed to build all the necessary .so files for all the architectures the application might need.

In the next sections, I will try to explain step by step how to build FFmpeg for Android. But first, I would like to present a kit I developed that should make this process much easier.

FFmpeg Development Kit

In order to make things easier, some time ago I prepared a special kit that simplifies the process of preparing .so libraries for Android considerably.

You can find this kit on my GitHub here: FFmpeg Development Kit

All the setup is described in the Readme – if you don’t really want to know how it works inside and just want to get the .so files as soon as possible, you can skip the rest of this post and go straight to the kit. If you run into any problems with it – make sure to leave a comment here or open an issue on the repo.


Another library I prepared that you might find useful: VideoKit. It is, basically, the result of the steps described in this article – it allows you to execute standard FFmpeg commands to process a video file.

You can find it here: VideoKit

Getting to work

First of all, you have to decide how you want to embed FFmpeg into your application. I know three ways to do it:

  1. Using a precompiled FFmpeg binary and executing it with Runtime.getRuntime().exec(“command”). Not a really clean way, and I would recommend against it.
  2. Building FFmpeg as .so libraries and then calling its main from your own code. A quite clean way that allows you to add appropriate checks and write JNI. However, note that you will have to write some C/C++ code for the JNI layer (and the NDK doesn’t support all modern C++ features), and you will still basically drive FFmpeg in command-line style.
  3. Using FFmpeg as a library and writing completely custom filters in pure C. I would say this is the cleanest way to embed FFmpeg, as well as the hardest. I would highly recommend against going that way, unless you have 3–4 months of completely free time to dig into the documentation and code.

In the rest of this post I will explain how to embed FFmpeg the second way, since from my point of view it’s the best way to achieve the necessary functionality.


You will need following components:

  • FFmpeg sources (I used FFmpeg 3.2.4)
  • Android NDK (I used r13b)
  • Patience

I was able to build FFmpeg on macOS (Sierra) and Ubuntu (12.04). While, theoretically, it should be possible to build FFmpeg on Windows with Cygwin, I would highly recommend against going that way. Even if you don’t have a Mac and use only Windows for development – consider installing Ubuntu as a second system or in a virtual machine and building FFmpeg there. In my experience, this will save you many hours of frustration and weird errors happening all around you.

After you have everything downloaded – extract the NDK somewhere on the disk. Then put the FFmpeg sources under the NDK/sources path. Note: this is very important for building your own JNI interface later on.
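The resulting layout should look roughly like this (the version numbers are simply the ones I used):

```
android-ndk-r13b/
├── ndk-build
├── platforms/
├── toolchains/
└── sources/
    └── ffmpeg/        <- extracted FFmpeg 3.2.4 sources
```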


The next sections use a few terms that may be unfamiliar to a reader who hasn’t worked with gcc or built open-source libraries before.

Here is a list of such terms with short explanations:

  • Toolchain – the tools used for compiling and building the sources.
  • Sysroot – the directory in which the compiler will search for system headers and libraries.
  • Prefix – the directory into which the build results will be written.
  • Cross-prefix – the path-and-name prefix of the cross-compiler tools to be used.
  • ABI – the processor architecture (i.e. x86, armv6, armv8-64 and so on).
  • CFLAGS – flags for the C compiler.
  • LDFLAGS – flags for the linker.


The first step in building FFmpeg is configuring it. Through a special script called “configure”, FFmpeg allows you to choose which features you need, which architecture you’re targeting, and so on.

For example, here is a configuration for armeabi (arm-v5):

./configure --prefix=$(pwd)/android/arm \
--extra-cflags="-O3 -Wall -pipe -std=c99 -ffast-math -fstrict-aliasing -Werror=strict-aliasing -Wno-psabi -Wa,--noexecstack -DANDROID -DNDEBUG -march=armv5te -mtune=arm9tdmi -msoft-float"

It looks a bit messy, but, unfortunately, that’s how it looks in the real world.

You can run:

./configure -h

to get the full list of available components and flags.
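For a real Android build, the configure call also needs cross-compilation options on top of the example above. A sketch of what that looks like for armeabi-v7a – the paths and exact flags here are from my setup and may differ for your NDK and FFmpeg versions, so treat this as a starting point, not a recipe:

```shell
NDK=/path/to/android-ndk
SYSROOT=$NDK/platforms/android-16/arch-arm
TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64

./configure \
  --prefix=$(pwd)/android/armv7 \
  --enable-cross-compile \
  --target-os=linux \
  --arch=arm \
  --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
  --sysroot=$SYSROOT \
  --enable-shared --disable-static \
  --disable-doc --disable-programs \
  --extra-cflags="-march=armv7-a -mfloat-abi=softfp"
```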

Building .so files

If you configured everything properly, you should be able to run the following commands:

make clean
make -j2 (change two to number of cores that you want to use)
make install

Be patient – the build may take a while. If everything goes well, you should find the .so files in the folder specified by the --prefix parameter of configure.


On some systems FFmpeg appends a version code to the end of the .so file name, so you might end up with something like “libavcodec.so.57”.

While that’s ok for usage on desktop systems like macOS or Ubuntu – Android will not accept such libraries. In order to remove the versioning, you have to edit the configure script.

Open configure in any text editor and find the lines that attach the version suffix to the shared library name. They look like this (the exact lines may differ slightly between FFmpeg versions):

SLIBNAME_WITH_MAJOR='$(SLIBNAME).$(LIBMAJOR)'
LIB_INSTALL_EXTRA_CMD='$$(RANLIB) "$(LIBDIR)/$(LIBNAME)"'
SLIB_INSTALL_NAME='$(SLIBNAME_WITH_VERSION)'
SLIB_INSTALL_LINKS='$(SLIBNAME_WITH_MAJOR) $(SLIBNAME)'

And change them to use plain $(SLIBNAME), so you get:

SLIBNAME_WITH_MAJOR='$(SLIBNAME)'
LIB_INSTALL_EXTRA_CMD='$$(RANLIB) "$(LIBDIR)/$(LIBNAME)"'
SLIB_INSTALL_NAME='$(SLIBNAME)'
SLIB_INSTALL_LINKS='$(SLIBNAME)'

This will turn off the versioning and should give you standard .so files.

NDK module

I’m assuming that if you’re still reading this, you have prepared all the necessary .so files, so we can go further.

Before we get to ndk-build, we have to define a module with the .so libraries that the NDK will recognize and be able to use. Defining a module is actually the easy part: in the top-level catalog (i.e. where the include and lib folders are located) you have to add a file called Android.mk with the following content:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := <libname>
LOCAL_SRC_FILES := lib/<libname>.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/include
include $(PREBUILT_SHARED_LIBRARY)

This file tells the NDK where to take the headers and library files from.

JNI interface

This is the last thing you have to do before you can actually run ndk-build and get the necessary .so files ready to go.

JNI stands for Java Native Interface, and it’s basically a bridge between Java code and native C code. Note that it’s not entirely usual Java and C code – it has to follow certain conventions to make everything work together. Let’s start with the Java code.

Java part

It’s pretty usual Java, besides the fact that you have to pay attention to the package in which the class is located, and that it must contain special functions marked with the keyword “native”.

Let’s consider a class named D, located in package a.b.c, that has a native function called “run”:

public class D {
    public native int run(int loglevel, String[] args);
}

This class is similar to an interface in the sense that native functions don’t require an implementation. When you call such a function, the implementation in the C counterpart of the code is actually executed.

Besides the special functions – it’s a normal class that may contain arbitrary Java code.

That’s pretty much it for the Java part. Let me present the C part of the code.

C code

Your C code that actually uses FFmpeg must match your Java interface. Basically, for the class defined above, the C part would look as follows:

#include <jni.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

JavaVM *sVm = NULL;

int main(int level, int argc, char **argv); // Forward declaration of FFmpeg's main

jint JNI_OnLoad(JavaVM *vm, void *reserved) {
    sVm = vm;
    return JNI_VERSION_1_6;
}

// Counterpart of the function "run" in class D
JNIEXPORT jint JNICALL Java_a_b_c_D_run(JNIEnv *env, jobject obj, jint loglevel, jobjectArray args) {
    // Convert the Java string array into a C char** and hand it to FFmpeg's main here
    return 0;
}

It looks a bit scary, but if you look closer – there is nothing special about it. The function name encodes the location of the class and function in the Java part of the code (this is why you should choose the package carefully).

When you call “run” in your Java code, “Java_a_b_c_D_run” is actually called in the C part.

Besides the naming conventions – there are no restrictions on the C code either. You can even use C++; however, I should warn you that C++ support on Android is not complete (it partially supports the C++11 standard, if I remember correctly).


This is pretty much the last step, and after it you’re free to go. Before you run the “ndk-build” command you must provide two files – Android.mk and Application.mk. It’s better to put those files in the same folder where your .c files are located. Android.mk is responsible for defining all the paths and pulling everything together. An example is below:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := videokit                 # name of the produced lib
LOCAL_LDLIBS := -llog -ljnigraphics -lz -landroid   # standard libs
ANDROID_LIB := -landroid
LOCAL_CFLAGS := -Wdeprecated-declarations -I$(NDK)/sources/ffmpeg   # cflags and include path
LOCAL_SRC_FILES := videokit.c ffmpeg.c ffmpeg_filter.c ffmpeg_opt.c cmdutils.c   # source files to compile
LOCAL_SHARED_LIBRARIES := libavformat libavcodec libswscale libavutil libswresample libavfilter libavdevice   # linked libraries
include $(BUILD_SHARED_LIBRARY)

$(call import-module,ffmpeg/android/$(CPU))   # path to the NDK module, relative to NDK/sources/

Application.mk defines the general configuration of the produced library:

APP_OPTIM := release          # optimization level
APP_PLATFORM := $(PLATFORM)   # platform level
APP_ABI := $(ABI)             # target ABI
NDK_TOOLCHAIN_VERSION=4.9     # version of the toolchain used
APP_PIE := false              # whether PIE (position-independent executables) are used
APP_STL := stlport_shared     # C++ STL implementation

APP_CFLAGS := -O3 -Wall -pipe \
-ffast-math \
-fstrict-aliasing -Werror=strict-aliasing \
-Wno-psabi -Wa,--noexecstack \
-DANDROID -DNDEBUG            # global c-flags

When both files are prepared and configured – navigate to their folder in the command line and run ndk-build. Make sure the NDK folder is added to PATH.

If everything was configured properly – you should get your library, together with the copied FFmpeg libraries, in the libs folder.

Libraries loading

After you have everything prepared, you must load the libraries into the memory of your application. First of all, make sure the libraries are located in the right folder in the project: it has to be /src/main/jniLibs/<ABI>/ (e.g. armeabi-v7a). If you don’t put them there – the Android system will not be able to locate them.

Loading is a rather easy step, but it may have some pitfalls in it. Basically, it may look as follows:

static {
    try {
        // Dependencies first, then our own library
        // (library names as built above; some FFmpeg libs omitted for brevity)
        System.loadLibrary("avutil");
        System.loadLibrary("avcodec");
        System.loadLibrary("avformat");
        System.loadLibrary("videokit");
    } catch (UnsatisfiedLinkError e) {
        // Log the error and disable the native-backed feature here
    }
}
The try-catch construction is rather optional, but it may save your application from an unexpected crash on a new or unexpected architecture.

Important note: the order matters. If a library doesn’t find its dependencies already loaded in memory – it will crash. So you have to load the library with no dependencies first, then the library that depends on it, and so on.
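A tiny, self-contained illustration of the failure mode (the library name here is deliberately made up – on a real device you get the same UnsatisfiedLinkError when a dependency wasn’t loaded first or the ABI folder is missing):

```java
public class LoadDemo {
    public static void main(String[] args) {
        try {
            // This library does not exist, so loadLibrary throws:
            System.loadLibrary("definitely_not_a_real_lib");
            System.out.println("loaded");
        } catch (UnsatisfiedLinkError e) {
            // In a real app: log it and disable the FFmpeg-backed feature
            System.out.println("UnsatisfiedLinkError caught");
        }
    }
}
```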


If you survived up to this point – you have successfully embedded FFmpeg into your application. I know it’s hard work, and I sincerely congratulate you on it.

If, unfortunately, you didn’t achieve this goal – you can always ask for help in the comment section, and I will try my best to help you.

References and acknowledgements

  1. An excellent tutorial on building FFmpeg for Android, unfortunately not updated for recent versions
  2. A forum thread explaining FFmpeg’s library versioning system